Dec  7 14:09:21 np0005549633 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  7 14:09:21 np0005549633 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  7 14:09:21 np0005549633 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  7 14:09:21 np0005549633 kernel: BIOS-provided physical RAM map:
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  7 14:09:21 np0005549633 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  7 14:09:21 np0005549633 kernel: NX (Execute Disable) protection: active
Dec  7 14:09:21 np0005549633 kernel: APIC: Static calls initialized
Dec  7 14:09:21 np0005549633 kernel: SMBIOS 2.8 present.
Dec  7 14:09:21 np0005549633 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  7 14:09:21 np0005549633 kernel: Hypervisor detected: KVM
Dec  7 14:09:21 np0005549633 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  7 14:09:21 np0005549633 kernel: kvm-clock: using sched offset of 3491509001 cycles
Dec  7 14:09:21 np0005549633 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  7 14:09:21 np0005549633 kernel: tsc: Detected 2799.998 MHz processor
Dec  7 14:09:21 np0005549633 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  7 14:09:21 np0005549633 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  7 14:09:21 np0005549633 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  7 14:09:21 np0005549633 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  7 14:09:21 np0005549633 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  7 14:09:21 np0005549633 kernel: Using GB pages for direct mapping
Dec  7 14:09:21 np0005549633 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  7 14:09:21 np0005549633 kernel: ACPI: Early table checksum verification disabled
Dec  7 14:09:21 np0005549633 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  7 14:09:21 np0005549633 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 14:09:21 np0005549633 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 14:09:21 np0005549633 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 14:09:21 np0005549633 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  7 14:09:21 np0005549633 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 14:09:21 np0005549633 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  7 14:09:21 np0005549633 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  7 14:09:21 np0005549633 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  7 14:09:21 np0005549633 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  7 14:09:21 np0005549633 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  7 14:09:21 np0005549633 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  7 14:09:21 np0005549633 kernel: No NUMA configuration found
Dec  7 14:09:21 np0005549633 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  7 14:09:21 np0005549633 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec  7 14:09:21 np0005549633 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  7 14:09:21 np0005549633 kernel: Zone ranges:
Dec  7 14:09:21 np0005549633 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  7 14:09:21 np0005549633 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  7 14:09:21 np0005549633 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  7 14:09:21 np0005549633 kernel:  Device   empty
Dec  7 14:09:21 np0005549633 kernel: Movable zone start for each node
Dec  7 14:09:21 np0005549633 kernel: Early memory node ranges
Dec  7 14:09:21 np0005549633 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  7 14:09:21 np0005549633 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  7 14:09:21 np0005549633 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  7 14:09:21 np0005549633 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  7 14:09:21 np0005549633 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  7 14:09:21 np0005549633 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  7 14:09:21 np0005549633 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  7 14:09:21 np0005549633 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  7 14:09:21 np0005549633 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  7 14:09:21 np0005549633 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  7 14:09:21 np0005549633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  7 14:09:21 np0005549633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  7 14:09:21 np0005549633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  7 14:09:21 np0005549633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  7 14:09:21 np0005549633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  7 14:09:21 np0005549633 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  7 14:09:21 np0005549633 kernel: TSC deadline timer available
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Max. logical packages:   8
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Max. logical dies:       8
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Max. dies per package:   1
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Max. threads per core:   1
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Num. cores per package:     1
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Num. threads per package:   1
Dec  7 14:09:21 np0005549633 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  7 14:09:21 np0005549633 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  7 14:09:21 np0005549633 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  7 14:09:21 np0005549633 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  7 14:09:21 np0005549633 kernel: Booting paravirtualized kernel on KVM
Dec  7 14:09:21 np0005549633 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  7 14:09:21 np0005549633 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  7 14:09:21 np0005549633 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  7 14:09:21 np0005549633 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  7 14:09:21 np0005549633 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  7 14:09:21 np0005549633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  7 14:09:21 np0005549633 kernel: random: crng init done
Dec  7 14:09:21 np0005549633 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: Fallback order for Node 0: 0 
Dec  7 14:09:21 np0005549633 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  7 14:09:21 np0005549633 kernel: Policy zone: Normal
Dec  7 14:09:21 np0005549633 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  7 14:09:21 np0005549633 kernel: software IO TLB: area num 8.
Dec  7 14:09:21 np0005549633 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  7 14:09:21 np0005549633 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  7 14:09:21 np0005549633 kernel: ftrace: allocated 193 pages with 3 groups
Dec  7 14:09:21 np0005549633 kernel: Dynamic Preempt: voluntary
Dec  7 14:09:21 np0005549633 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  7 14:09:21 np0005549633 kernel: rcu: 	RCU event tracing is enabled.
Dec  7 14:09:21 np0005549633 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  7 14:09:21 np0005549633 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  7 14:09:21 np0005549633 kernel: 	Rude variant of Tasks RCU enabled.
Dec  7 14:09:21 np0005549633 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  7 14:09:21 np0005549633 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  7 14:09:21 np0005549633 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  7 14:09:21 np0005549633 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  7 14:09:21 np0005549633 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  7 14:09:21 np0005549633 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  7 14:09:21 np0005549633 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  7 14:09:21 np0005549633 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  7 14:09:21 np0005549633 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  7 14:09:21 np0005549633 kernel: Console: colour VGA+ 80x25
Dec  7 14:09:21 np0005549633 kernel: printk: console [ttyS0] enabled
Dec  7 14:09:21 np0005549633 kernel: ACPI: Core revision 20230331
Dec  7 14:09:21 np0005549633 kernel: APIC: Switch to symmetric I/O mode setup
Dec  7 14:09:21 np0005549633 kernel: x2apic enabled
Dec  7 14:09:21 np0005549633 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  7 14:09:21 np0005549633 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  7 14:09:21 np0005549633 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  7 14:09:21 np0005549633 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  7 14:09:21 np0005549633 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  7 14:09:21 np0005549633 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  7 14:09:21 np0005549633 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  7 14:09:21 np0005549633 kernel: Spectre V2 : Mitigation: Retpolines
Dec  7 14:09:21 np0005549633 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  7 14:09:21 np0005549633 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  7 14:09:21 np0005549633 kernel: RETBleed: Mitigation: untrained return thunk
Dec  7 14:09:21 np0005549633 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  7 14:09:21 np0005549633 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  7 14:09:21 np0005549633 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  7 14:09:21 np0005549633 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  7 14:09:21 np0005549633 kernel: x86/bugs: return thunk changed
Dec  7 14:09:21 np0005549633 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  7 14:09:21 np0005549633 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  7 14:09:21 np0005549633 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  7 14:09:21 np0005549633 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  7 14:09:21 np0005549633 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  7 14:09:21 np0005549633 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  7 14:09:21 np0005549633 kernel: Freeing SMP alternatives memory: 40K
Dec  7 14:09:21 np0005549633 kernel: pid_max: default: 32768 minimum: 301
Dec  7 14:09:21 np0005549633 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  7 14:09:21 np0005549633 kernel: landlock: Up and running.
Dec  7 14:09:21 np0005549633 kernel: Yama: becoming mindful.
Dec  7 14:09:21 np0005549633 kernel: SELinux:  Initializing.
Dec  7 14:09:21 np0005549633 kernel: LSM support for eBPF active
Dec  7 14:09:21 np0005549633 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  7 14:09:21 np0005549633 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  7 14:09:21 np0005549633 kernel: ... version:                0
Dec  7 14:09:21 np0005549633 kernel: ... bit width:              48
Dec  7 14:09:21 np0005549633 kernel: ... generic registers:      6
Dec  7 14:09:21 np0005549633 kernel: ... value mask:             0000ffffffffffff
Dec  7 14:09:21 np0005549633 kernel: ... max period:             00007fffffffffff
Dec  7 14:09:21 np0005549633 kernel: ... fixed-purpose events:   0
Dec  7 14:09:21 np0005549633 kernel: ... event mask:             000000000000003f
Dec  7 14:09:21 np0005549633 kernel: signal: max sigframe size: 1776
Dec  7 14:09:21 np0005549633 kernel: rcu: Hierarchical SRCU implementation.
Dec  7 14:09:21 np0005549633 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  7 14:09:21 np0005549633 kernel: smp: Bringing up secondary CPUs ...
Dec  7 14:09:21 np0005549633 kernel: smpboot: x86: Booting SMP configuration:
Dec  7 14:09:21 np0005549633 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  7 14:09:21 np0005549633 kernel: smp: Brought up 1 node, 8 CPUs
Dec  7 14:09:21 np0005549633 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  7 14:09:21 np0005549633 kernel: node 0 deferred pages initialised in 9ms
Dec  7 14:09:21 np0005549633 kernel: Memory: 7764000K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618212K reserved, 0K cma-reserved)
Dec  7 14:09:21 np0005549633 kernel: devtmpfs: initialized
Dec  7 14:09:21 np0005549633 kernel: x86/mm: Memory block size: 128MB
Dec  7 14:09:21 np0005549633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  7 14:09:21 np0005549633 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  7 14:09:21 np0005549633 kernel: pinctrl core: initialized pinctrl subsystem
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  7 14:09:21 np0005549633 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  7 14:09:21 np0005549633 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  7 14:09:21 np0005549633 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  7 14:09:21 np0005549633 kernel: audit: initializing netlink subsys (disabled)
Dec  7 14:09:21 np0005549633 kernel: audit: type=2000 audit(1765134560.126:1): state=initialized audit_enabled=0 res=1
Dec  7 14:09:21 np0005549633 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  7 14:09:21 np0005549633 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  7 14:09:21 np0005549633 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  7 14:09:21 np0005549633 kernel: cpuidle: using governor menu
Dec  7 14:09:21 np0005549633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  7 14:09:21 np0005549633 kernel: PCI: Using configuration type 1 for base access
Dec  7 14:09:21 np0005549633 kernel: PCI: Using configuration type 1 for extended access
Dec  7 14:09:21 np0005549633 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  7 14:09:21 np0005549633 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  7 14:09:21 np0005549633 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  7 14:09:21 np0005549633 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  7 14:09:21 np0005549633 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  7 14:09:21 np0005549633 kernel: Demotion targets for Node 0: null
Dec  7 14:09:21 np0005549633 kernel: cryptd: max_cpu_qlen set to 1000
Dec  7 14:09:21 np0005549633 kernel: ACPI: Added _OSI(Module Device)
Dec  7 14:09:21 np0005549633 kernel: ACPI: Added _OSI(Processor Device)
Dec  7 14:09:21 np0005549633 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  7 14:09:21 np0005549633 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  7 14:09:21 np0005549633 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  7 14:09:21 np0005549633 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  7 14:09:21 np0005549633 kernel: ACPI: Interpreter enabled
Dec  7 14:09:21 np0005549633 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  7 14:09:21 np0005549633 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  7 14:09:21 np0005549633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  7 14:09:21 np0005549633 kernel: PCI: Using E820 reservations for host bridge windows
Dec  7 14:09:21 np0005549633 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  7 14:09:21 np0005549633 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  7 14:09:21 np0005549633 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [3] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [4] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [5] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [6] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [7] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [8] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [9] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [10] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [11] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [12] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [13] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [14] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [15] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [16] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [17] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [18] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [19] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [20] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [21] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [22] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [23] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [24] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [25] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [26] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [27] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [28] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [29] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [30] registered
Dec  7 14:09:21 np0005549633 kernel: acpiphp: Slot [31] registered
Dec  7 14:09:21 np0005549633 kernel: PCI host bridge to bus 0000:00
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  7 14:09:21 np0005549633 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  7 14:09:21 np0005549633 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  7 14:09:21 np0005549633 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  7 14:09:21 np0005549633 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  7 14:09:21 np0005549633 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  7 14:09:21 np0005549633 kernel: iommu: Default domain type: Translated
Dec  7 14:09:21 np0005549633 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  7 14:09:21 np0005549633 kernel: SCSI subsystem initialized
Dec  7 14:09:21 np0005549633 kernel: ACPI: bus type USB registered
Dec  7 14:09:21 np0005549633 kernel: usbcore: registered new interface driver usbfs
Dec  7 14:09:21 np0005549633 kernel: usbcore: registered new interface driver hub
Dec  7 14:09:21 np0005549633 kernel: usbcore: registered new device driver usb
Dec  7 14:09:21 np0005549633 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  7 14:09:21 np0005549633 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  7 14:09:21 np0005549633 kernel: PTP clock support registered
Dec  7 14:09:21 np0005549633 kernel: EDAC MC: Ver: 3.0.0
Dec  7 14:09:21 np0005549633 kernel: NetLabel: Initializing
Dec  7 14:09:21 np0005549633 kernel: NetLabel:  domain hash size = 128
Dec  7 14:09:21 np0005549633 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  7 14:09:21 np0005549633 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  7 14:09:21 np0005549633 kernel: PCI: Using ACPI for IRQ routing
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  7 14:09:21 np0005549633 kernel: vgaarb: loaded
Dec  7 14:09:21 np0005549633 kernel: clocksource: Switched to clocksource kvm-clock
Dec  7 14:09:21 np0005549633 kernel: VFS: Disk quotas dquot_6.6.0
Dec  7 14:09:21 np0005549633 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  7 14:09:21 np0005549633 kernel: pnp: PnP ACPI init
Dec  7 14:09:21 np0005549633 kernel: pnp: PnP ACPI: found 5 devices
Dec  7 14:09:21 np0005549633 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_INET protocol family
Dec  7 14:09:21 np0005549633 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  7 14:09:21 np0005549633 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_XDP protocol family
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  7 14:09:21 np0005549633 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  7 14:09:21 np0005549633 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  7 14:09:21 np0005549633 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74636 usecs
Dec  7 14:09:21 np0005549633 kernel: PCI: CLS 0 bytes, default 64
Dec  7 14:09:21 np0005549633 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  7 14:09:21 np0005549633 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  7 14:09:21 np0005549633 kernel: Trying to unpack rootfs image as initramfs...
Dec  7 14:09:21 np0005549633 kernel: ACPI: bus type thunderbolt registered
Dec  7 14:09:21 np0005549633 kernel: Initialise system trusted keyrings
Dec  7 14:09:21 np0005549633 kernel: Key type blacklist registered
Dec  7 14:09:21 np0005549633 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  7 14:09:21 np0005549633 kernel: zbud: loaded
Dec  7 14:09:21 np0005549633 kernel: integrity: Platform Keyring initialized
Dec  7 14:09:21 np0005549633 kernel: integrity: Machine keyring initialized
Dec  7 14:09:21 np0005549633 kernel: Freeing initrd memory: 87804K
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_ALG protocol family
Dec  7 14:09:21 np0005549633 kernel: xor: automatically using best checksumming function   avx       
Dec  7 14:09:21 np0005549633 kernel: Key type asymmetric registered
Dec  7 14:09:21 np0005549633 kernel: Asymmetric key parser 'x509' registered
Dec  7 14:09:21 np0005549633 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  7 14:09:21 np0005549633 kernel: io scheduler mq-deadline registered
Dec  7 14:09:21 np0005549633 kernel: io scheduler kyber registered
Dec  7 14:09:21 np0005549633 kernel: io scheduler bfq registered
Dec  7 14:09:21 np0005549633 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  7 14:09:21 np0005549633 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  7 14:09:21 np0005549633 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  7 14:09:21 np0005549633 kernel: ACPI: button: Power Button [PWRF]
Dec  7 14:09:21 np0005549633 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  7 14:09:21 np0005549633 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  7 14:09:21 np0005549633 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  7 14:09:21 np0005549633 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  7 14:09:21 np0005549633 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  7 14:09:21 np0005549633 kernel: Non-volatile memory driver v1.3
Dec  7 14:09:21 np0005549633 kernel: rdac: device handler registered
Dec  7 14:09:21 np0005549633 kernel: hp_sw: device handler registered
Dec  7 14:09:21 np0005549633 kernel: emc: device handler registered
Dec  7 14:09:21 np0005549633 kernel: alua: device handler registered
Dec  7 14:09:21 np0005549633 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  7 14:09:21 np0005549633 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  7 14:09:21 np0005549633 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  7 14:09:21 np0005549633 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  7 14:09:21 np0005549633 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  7 14:09:21 np0005549633 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  7 14:09:21 np0005549633 kernel: usb usb1: Product: UHCI Host Controller
Dec  7 14:09:21 np0005549633 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  7 14:09:21 np0005549633 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  7 14:09:21 np0005549633 kernel: hub 1-0:1.0: USB hub found
Dec  7 14:09:21 np0005549633 kernel: hub 1-0:1.0: 2 ports detected
Dec  7 14:09:21 np0005549633 kernel: usbcore: registered new interface driver usbserial_generic
Dec  7 14:09:21 np0005549633 kernel: usbserial: USB Serial support registered for generic
Dec  7 14:09:21 np0005549633 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  7 14:09:21 np0005549633 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  7 14:09:21 np0005549633 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  7 14:09:21 np0005549633 kernel: mousedev: PS/2 mouse device common for all mice
Dec  7 14:09:21 np0005549633 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  7 14:09:21 np0005549633 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  7 14:09:21 np0005549633 kernel: rtc_cmos 00:04: registered as rtc0
Dec  7 14:09:21 np0005549633 kernel: rtc_cmos 00:04: setting system clock to 2025-12-07T19:09:20 UTC (1765134560)
Dec  7 14:09:21 np0005549633 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  7 14:09:21 np0005549633 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  7 14:09:21 np0005549633 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  7 14:09:21 np0005549633 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  7 14:09:21 np0005549633 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  7 14:09:21 np0005549633 kernel: usbcore: registered new interface driver usbhid
Dec  7 14:09:21 np0005549633 kernel: usbhid: USB HID core driver
Dec  7 14:09:21 np0005549633 kernel: drop_monitor: Initializing network drop monitor service
Dec  7 14:09:21 np0005549633 kernel: Initializing XFRM netlink socket
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_INET6 protocol family
Dec  7 14:09:21 np0005549633 kernel: Segment Routing with IPv6
Dec  7 14:09:21 np0005549633 kernel: NET: Registered PF_PACKET protocol family
Dec  7 14:09:21 np0005549633 kernel: mpls_gso: MPLS GSO support
Dec  7 14:09:21 np0005549633 kernel: IPI shorthand broadcast: enabled
Dec  7 14:09:21 np0005549633 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  7 14:09:21 np0005549633 kernel: AES CTR mode by8 optimization enabled
Dec  7 14:09:21 np0005549633 kernel: sched_clock: Marking stable (1238001992, 153368859)->(1471465211, -80094360)
Dec  7 14:09:21 np0005549633 kernel: registered taskstats version 1
Dec  7 14:09:21 np0005549633 kernel: Loading compiled-in X.509 certificates
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  7 14:09:21 np0005549633 kernel: Demotion targets for Node 0: null
Dec  7 14:09:21 np0005549633 kernel: page_owner is disabled
Dec  7 14:09:21 np0005549633 kernel: Key type .fscrypt registered
Dec  7 14:09:21 np0005549633 kernel: Key type fscrypt-provisioning registered
Dec  7 14:09:21 np0005549633 kernel: Key type big_key registered
Dec  7 14:09:21 np0005549633 kernel: Key type encrypted registered
Dec  7 14:09:21 np0005549633 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  7 14:09:21 np0005549633 kernel: Loading compiled-in module X.509 certificates
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  7 14:09:21 np0005549633 kernel: ima: Allocated hash algorithm: sha256
Dec  7 14:09:21 np0005549633 kernel: ima: No architecture policies found
Dec  7 14:09:21 np0005549633 kernel: evm: Initialising EVM extended attributes:
Dec  7 14:09:21 np0005549633 kernel: evm: security.selinux
Dec  7 14:09:21 np0005549633 kernel: evm: security.SMACK64 (disabled)
Dec  7 14:09:21 np0005549633 kernel: evm: security.SMACK64EXEC (disabled)
Dec  7 14:09:21 np0005549633 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  7 14:09:21 np0005549633 kernel: evm: security.SMACK64MMAP (disabled)
Dec  7 14:09:21 np0005549633 kernel: evm: security.apparmor (disabled)
Dec  7 14:09:21 np0005549633 kernel: evm: security.ima
Dec  7 14:09:21 np0005549633 kernel: evm: security.capability
Dec  7 14:09:21 np0005549633 kernel: evm: HMAC attrs: 0x1
Dec  7 14:09:21 np0005549633 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  7 14:09:21 np0005549633 kernel: Running certificate verification RSA selftest
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  7 14:09:21 np0005549633 kernel: Running certificate verification ECDSA selftest
Dec  7 14:09:21 np0005549633 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  7 14:09:21 np0005549633 kernel: clk: Disabling unused clocks
Dec  7 14:09:21 np0005549633 kernel: Freeing unused decrypted memory: 2028K
Dec  7 14:09:21 np0005549633 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  7 14:09:21 np0005549633 kernel: Write protecting the kernel read-only data: 30720k
Dec  7 14:09:21 np0005549633 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  7 14:09:21 np0005549633 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  7 14:09:21 np0005549633 kernel: Run /init as init process
Dec  7 14:09:21 np0005549633 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  7 14:09:21 np0005549633 systemd: Detected virtualization kvm.
Dec  7 14:09:21 np0005549633 systemd: Detected architecture x86-64.
Dec  7 14:09:21 np0005549633 systemd: Running in initrd.
Dec  7 14:09:21 np0005549633 systemd: No hostname configured, using default hostname.
Dec  7 14:09:21 np0005549633 systemd: Hostname set to <localhost>.
Dec  7 14:09:21 np0005549633 systemd: Initializing machine ID from VM UUID.
Dec  7 14:09:21 np0005549633 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  7 14:09:21 np0005549633 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  7 14:09:21 np0005549633 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  7 14:09:21 np0005549633 kernel: usb 1-1: Manufacturer: QEMU
Dec  7 14:09:21 np0005549633 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  7 14:09:21 np0005549633 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  7 14:09:21 np0005549633 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  7 14:09:21 np0005549633 systemd: Queued start job for default target Initrd Default Target.
Dec  7 14:09:21 np0005549633 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  7 14:09:21 np0005549633 systemd: Reached target Local Encrypted Volumes.
Dec  7 14:09:21 np0005549633 systemd: Reached target Initrd /usr File System.
Dec  7 14:09:21 np0005549633 systemd: Reached target Local File Systems.
Dec  7 14:09:21 np0005549633 systemd: Reached target Path Units.
Dec  7 14:09:21 np0005549633 systemd: Reached target Slice Units.
Dec  7 14:09:21 np0005549633 systemd: Reached target Swaps.
Dec  7 14:09:21 np0005549633 systemd: Reached target Timer Units.
Dec  7 14:09:21 np0005549633 systemd: Listening on D-Bus System Message Bus Socket.
Dec  7 14:09:21 np0005549633 systemd: Listening on Journal Socket (/dev/log).
Dec  7 14:09:21 np0005549633 systemd: Listening on Journal Socket.
Dec  7 14:09:21 np0005549633 systemd: Listening on udev Control Socket.
Dec  7 14:09:21 np0005549633 systemd: Listening on udev Kernel Socket.
Dec  7 14:09:21 np0005549633 systemd: Reached target Socket Units.
Dec  7 14:09:21 np0005549633 systemd: Starting Create List of Static Device Nodes...
Dec  7 14:09:21 np0005549633 systemd: Starting Journal Service...
Dec  7 14:09:21 np0005549633 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  7 14:09:21 np0005549633 systemd: Starting Apply Kernel Variables...
Dec  7 14:09:21 np0005549633 systemd: Starting Create System Users...
Dec  7 14:09:21 np0005549633 systemd: Starting Setup Virtual Console...
Dec  7 14:09:21 np0005549633 systemd: Finished Create List of Static Device Nodes.
Dec  7 14:09:21 np0005549633 systemd: Finished Apply Kernel Variables.
Dec  7 14:09:21 np0005549633 systemd: Finished Create System Users.
Dec  7 14:09:21 np0005549633 systemd: Starting Create Static Device Nodes in /dev...
Dec  7 14:09:21 np0005549633 systemd-journald[305]: Journal started
Dec  7 14:09:21 np0005549633 systemd-journald[305]: Runtime Journal (/run/log/journal/8f4c7c63744e4874bc018ee0dc3af4a0) is 8.0M, max 153.6M, 145.6M free.
Dec  7 14:09:21 np0005549633 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  7 14:09:21 np0005549633 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  7 14:09:21 np0005549633 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  7 14:09:21 np0005549633 systemd: Started Journal Service.
Dec  7 14:09:21 np0005549633 systemd[1]: Starting Create Volatile Files and Directories...
Dec  7 14:09:21 np0005549633 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  7 14:09:21 np0005549633 systemd[1]: Finished Create Volatile Files and Directories.
Dec  7 14:09:21 np0005549633 systemd[1]: Finished Setup Virtual Console.
Dec  7 14:09:21 np0005549633 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  7 14:09:21 np0005549633 systemd[1]: Starting dracut cmdline hook...
Dec  7 14:09:21 np0005549633 dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Dec  7 14:09:21 np0005549633 dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  7 14:09:21 np0005549633 systemd[1]: Finished dracut cmdline hook.
Dec  7 14:09:21 np0005549633 systemd[1]: Starting dracut pre-udev hook...
Dec  7 14:09:21 np0005549633 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  7 14:09:21 np0005549633 kernel: device-mapper: uevent: version 1.0.3
Dec  7 14:09:21 np0005549633 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  7 14:09:21 np0005549633 kernel: RPC: Registered named UNIX socket transport module.
Dec  7 14:09:21 np0005549633 kernel: RPC: Registered udp transport module.
Dec  7 14:09:21 np0005549633 kernel: RPC: Registered tcp transport module.
Dec  7 14:09:21 np0005549633 kernel: RPC: Registered tcp-with-tls transport module.
Dec  7 14:09:21 np0005549633 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  7 14:09:21 np0005549633 rpc.statd[442]: Version 2.5.4 starting
Dec  7 14:09:21 np0005549633 rpc.statd[442]: Initializing NSM state
Dec  7 14:09:21 np0005549633 rpc.idmapd[447]: Setting log level to 0
Dec  7 14:09:21 np0005549633 systemd[1]: Finished dracut pre-udev hook.
Dec  7 14:09:21 np0005549633 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  7 14:09:21 np0005549633 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec  7 14:09:21 np0005549633 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  7 14:09:21 np0005549633 systemd[1]: Starting dracut pre-trigger hook...
Dec  7 14:09:21 np0005549633 systemd[1]: Finished dracut pre-trigger hook.
Dec  7 14:09:21 np0005549633 systemd[1]: Starting Coldplug All udev Devices...
Dec  7 14:09:22 np0005549633 systemd[1]: Created slice Slice /system/modprobe.
Dec  7 14:09:22 np0005549633 systemd[1]: Starting Load Kernel Module configfs...
Dec  7 14:09:22 np0005549633 systemd[1]: Finished Coldplug All udev Devices.
Dec  7 14:09:22 np0005549633 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target Network.
Dec  7 14:09:22 np0005549633 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  7 14:09:22 np0005549633 systemd[1]: Starting dracut initqueue hook...
Dec  7 14:09:22 np0005549633 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  7 14:09:22 np0005549633 systemd[1]: Finished Load Kernel Module configfs.
Dec  7 14:09:22 np0005549633 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  7 14:09:22 np0005549633 systemd[1]: Mounting Kernel Configuration File System...
Dec  7 14:09:22 np0005549633 systemd[1]: Mounted Kernel Configuration File System.
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target System Initialization.
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target Basic System.
Dec  7 14:09:22 np0005549633 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  7 14:09:22 np0005549633 kernel: scsi host0: ata_piix
Dec  7 14:09:22 np0005549633 kernel: vda: vda1
Dec  7 14:09:22 np0005549633 kernel: scsi host1: ata_piix
Dec  7 14:09:22 np0005549633 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  7 14:09:22 np0005549633 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  7 14:09:22 np0005549633 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target Initrd Root Device.
Dec  7 14:09:22 np0005549633 kernel: ata1: found unknown device (class 0)
Dec  7 14:09:22 np0005549633 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  7 14:09:22 np0005549633 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  7 14:09:22 np0005549633 systemd-udevd[499]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 14:09:22 np0005549633 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  7 14:09:22 np0005549633 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  7 14:09:22 np0005549633 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  7 14:09:22 np0005549633 systemd[1]: Finished dracut initqueue hook.
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  7 14:09:22 np0005549633 systemd[1]: Reached target Remote File Systems.
Dec  7 14:09:22 np0005549633 systemd[1]: Starting dracut pre-mount hook...
Dec  7 14:09:22 np0005549633 systemd[1]: Finished dracut pre-mount hook.
Dec  7 14:09:22 np0005549633 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  7 14:09:22 np0005549633 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec  7 14:09:22 np0005549633 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  7 14:09:22 np0005549633 systemd[1]: Mounting /sysroot...
Dec  7 14:09:23 np0005549633 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  7 14:09:23 np0005549633 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  7 14:09:23 np0005549633 kernel: XFS (vda1): Ending clean mount
Dec  7 14:09:23 np0005549633 systemd[1]: Mounted /sysroot.
Dec  7 14:09:23 np0005549633 systemd[1]: Reached target Initrd Root File System.
Dec  7 14:09:23 np0005549633 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  7 14:09:23 np0005549633 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  7 14:09:23 np0005549633 systemd[1]: Reached target Initrd File Systems.
Dec  7 14:09:23 np0005549633 systemd[1]: Reached target Initrd Default Target.
Dec  7 14:09:23 np0005549633 systemd[1]: Starting dracut mount hook...
Dec  7 14:09:23 np0005549633 systemd[1]: Finished dracut mount hook.
Dec  7 14:09:23 np0005549633 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  7 14:09:23 np0005549633 rpc.idmapd[447]: exiting on signal 15
Dec  7 14:09:23 np0005549633 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  7 14:09:23 np0005549633 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Network.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Timer Units.
Dec  7 14:09:23 np0005549633 systemd[1]: dbus.socket: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  7 14:09:23 np0005549633 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Initrd Default Target.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Basic System.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Initrd Root Device.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Initrd /usr File System.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Path Units.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Remote File Systems.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Slice Units.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Socket Units.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target System Initialization.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Local File Systems.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Swaps.
Dec  7 14:09:23 np0005549633 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped dracut mount hook.
Dec  7 14:09:23 np0005549633 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped dracut pre-mount hook.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  7 14:09:23 np0005549633 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped dracut initqueue hook.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped Apply Kernel Variables.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped Coldplug All udev Devices.
Dec  7 14:09:23 np0005549633 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped dracut pre-trigger hook.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped Setup Virtual Console.
Dec  7 14:09:23 np0005549633 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-udevd.service: Consumed 1.022s CPU time.
Dec  7 14:09:23 np0005549633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  7 14:09:23 np0005549633 systemd[1]: Closed udev Control Socket.
Dec  7 14:09:24 np0005549633 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Closed udev Kernel Socket.
Dec  7 14:09:24 np0005549633 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Stopped dracut pre-udev hook.
Dec  7 14:09:24 np0005549633 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Stopped dracut cmdline hook.
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Cleanup udev Database...
Dec  7 14:09:24 np0005549633 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  7 14:09:24 np0005549633 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  7 14:09:24 np0005549633 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Stopped Create System Users.
Dec  7 14:09:24 np0005549633 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  7 14:09:24 np0005549633 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Cleanup udev Database.
Dec  7 14:09:24 np0005549633 systemd[1]: Reached target Switch Root.
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Switch Root...
Dec  7 14:09:24 np0005549633 systemd[1]: Switching root.
Dec  7 14:09:24 np0005549633 systemd-journald[305]: Journal stopped
Dec  7 14:09:24 np0005549633 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  7 14:09:24 np0005549633 kernel: audit: type=1404 audit(1765134564.177:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:09:24 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:09:24 np0005549633 kernel: audit: type=1403 audit(1765134564.296:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  7 14:09:24 np0005549633 systemd: Successfully loaded SELinux policy in 122.693ms.
Dec  7 14:09:24 np0005549633 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.729ms.
Dec  7 14:09:24 np0005549633 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  7 14:09:24 np0005549633 systemd: Detected virtualization kvm.
Dec  7 14:09:24 np0005549633 systemd: Detected architecture x86-64.
Dec  7 14:09:24 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:09:24 np0005549633 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd: Stopped Switch Root.
Dec  7 14:09:24 np0005549633 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  7 14:09:24 np0005549633 systemd: Created slice Slice /system/getty.
Dec  7 14:09:24 np0005549633 systemd: Created slice Slice /system/serial-getty.
Dec  7 14:09:24 np0005549633 systemd: Created slice Slice /system/sshd-keygen.
Dec  7 14:09:24 np0005549633 systemd: Created slice User and Session Slice.
Dec  7 14:09:24 np0005549633 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  7 14:09:24 np0005549633 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  7 14:09:24 np0005549633 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  7 14:09:24 np0005549633 systemd: Reached target Local Encrypted Volumes.
Dec  7 14:09:24 np0005549633 systemd: Stopped target Switch Root.
Dec  7 14:09:24 np0005549633 systemd: Stopped target Initrd File Systems.
Dec  7 14:09:24 np0005549633 systemd: Stopped target Initrd Root File System.
Dec  7 14:09:24 np0005549633 systemd: Reached target Local Integrity Protected Volumes.
Dec  7 14:09:24 np0005549633 systemd: Reached target Path Units.
Dec  7 14:09:24 np0005549633 systemd: Reached target rpc_pipefs.target.
Dec  7 14:09:24 np0005549633 systemd: Reached target Slice Units.
Dec  7 14:09:24 np0005549633 systemd: Reached target Swaps.
Dec  7 14:09:24 np0005549633 systemd: Reached target Local Verity Protected Volumes.
Dec  7 14:09:24 np0005549633 systemd: Listening on RPCbind Server Activation Socket.
Dec  7 14:09:24 np0005549633 systemd: Reached target RPC Port Mapper.
Dec  7 14:09:24 np0005549633 systemd: Listening on Process Core Dump Socket.
Dec  7 14:09:24 np0005549633 systemd: Listening on initctl Compatibility Named Pipe.
Dec  7 14:09:24 np0005549633 systemd: Listening on udev Control Socket.
Dec  7 14:09:24 np0005549633 systemd: Listening on udev Kernel Socket.
Dec  7 14:09:24 np0005549633 systemd: Mounting Huge Pages File System...
Dec  7 14:09:24 np0005549633 systemd: Mounting POSIX Message Queue File System...
Dec  7 14:09:24 np0005549633 systemd: Mounting Kernel Debug File System...
Dec  7 14:09:24 np0005549633 systemd: Mounting Kernel Trace File System...
Dec  7 14:09:24 np0005549633 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  7 14:09:24 np0005549633 systemd: Starting Create List of Static Device Nodes...
Dec  7 14:09:24 np0005549633 systemd: Starting Load Kernel Module configfs...
Dec  7 14:09:24 np0005549633 systemd: Starting Load Kernel Module drm...
Dec  7 14:09:24 np0005549633 systemd: Starting Load Kernel Module efi_pstore...
Dec  7 14:09:24 np0005549633 systemd: Starting Load Kernel Module fuse...
Dec  7 14:09:24 np0005549633 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  7 14:09:24 np0005549633 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd: Stopped File System Check on Root Device.
Dec  7 14:09:24 np0005549633 systemd: Stopped Journal Service.
Dec  7 14:09:24 np0005549633 systemd: Starting Journal Service...
Dec  7 14:09:24 np0005549633 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  7 14:09:24 np0005549633 systemd: Starting Generate network units from Kernel command line...
Dec  7 14:09:24 np0005549633 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  7 14:09:24 np0005549633 systemd: Starting Remount Root and Kernel File Systems...
Dec  7 14:09:24 np0005549633 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  7 14:09:24 np0005549633 systemd: Starting Apply Kernel Variables...
Dec  7 14:09:24 np0005549633 systemd: Starting Coldplug All udev Devices...
Dec  7 14:09:24 np0005549633 kernel: fuse: init (API version 7.37)
Dec  7 14:09:24 np0005549633 systemd: Mounted Huge Pages File System.
Dec  7 14:09:24 np0005549633 systemd: Mounted POSIX Message Queue File System.
Dec  7 14:09:24 np0005549633 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  7 14:09:24 np0005549633 systemd: Mounted Kernel Debug File System.
Dec  7 14:09:24 np0005549633 systemd: Mounted Kernel Trace File System.
Dec  7 14:09:24 np0005549633 systemd-journald[682]: Journal started
Dec  7 14:09:24 np0005549633 systemd-journald[682]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  7 14:09:24 np0005549633 systemd[1]: Queued start job for default target Multi-User System.
Dec  7 14:09:24 np0005549633 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd: Started Journal Service.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Create List of Static Device Nodes.
Dec  7 14:09:24 np0005549633 kernel: ACPI: bus type drm_connector registered
Dec  7 14:09:24 np0005549633 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Load Kernel Module configfs.
Dec  7 14:09:24 np0005549633 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Load Kernel Module drm.
Dec  7 14:09:24 np0005549633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  7 14:09:24 np0005549633 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Load Kernel Module fuse.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Generate network units from Kernel command line.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  7 14:09:24 np0005549633 systemd[1]: Mounting FUSE Control File System...
Dec  7 14:09:24 np0005549633 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Rebuild Hardware Database...
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  7 14:09:24 np0005549633 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Load/Save OS Random Seed...
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Create System Users...
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Apply Kernel Variables.
Dec  7 14:09:24 np0005549633 systemd-journald[682]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  7 14:09:24 np0005549633 systemd-journald[682]: Received client request to flush runtime journal.
Dec  7 14:09:24 np0005549633 systemd[1]: Mounted FUSE Control File System.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Load/Save OS Random Seed.
Dec  7 14:09:24 np0005549633 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Create System Users.
Dec  7 14:09:24 np0005549633 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  7 14:09:24 np0005549633 systemd[1]: Finished Coldplug All udev Devices.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target Preparation for Local File Systems.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target Local File Systems.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  7 14:09:25 np0005549633 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  7 14:09:25 np0005549633 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  7 14:09:25 np0005549633 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Automatic Boot Loader Update...
Dec  7 14:09:25 np0005549633 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Create Volatile Files and Directories...
Dec  7 14:09:25 np0005549633 bootctl[698]: Couldn't find EFI system partition, skipping.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Automatic Boot Loader Update.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Create Volatile Files and Directories.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Security Auditing Service...
Dec  7 14:09:25 np0005549633 systemd[1]: Starting RPC Bind...
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Rebuild Journal Catalog...
Dec  7 14:09:25 np0005549633 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  7 14:09:25 np0005549633 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Rebuild Journal Catalog.
Dec  7 14:09:25 np0005549633 augenrules[709]: /sbin/augenrules: No change
Dec  7 14:09:25 np0005549633 systemd[1]: Started RPC Bind.
Dec  7 14:09:25 np0005549633 augenrules[724]: No rules
Dec  7 14:09:25 np0005549633 augenrules[724]: enabled 1
Dec  7 14:09:25 np0005549633 augenrules[724]: failure 1
Dec  7 14:09:25 np0005549633 augenrules[724]: pid 703
Dec  7 14:09:25 np0005549633 augenrules[724]: rate_limit 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_limit 8192
Dec  7 14:09:25 np0005549633 augenrules[724]: lost 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_wait_time 60000
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_wait_time_actual 0
Dec  7 14:09:25 np0005549633 augenrules[724]: enabled 1
Dec  7 14:09:25 np0005549633 augenrules[724]: failure 1
Dec  7 14:09:25 np0005549633 augenrules[724]: pid 703
Dec  7 14:09:25 np0005549633 augenrules[724]: rate_limit 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_limit 8192
Dec  7 14:09:25 np0005549633 augenrules[724]: lost 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_wait_time 60000
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_wait_time_actual 0
Dec  7 14:09:25 np0005549633 augenrules[724]: enabled 1
Dec  7 14:09:25 np0005549633 augenrules[724]: failure 1
Dec  7 14:09:25 np0005549633 augenrules[724]: pid 703
Dec  7 14:09:25 np0005549633 augenrules[724]: rate_limit 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_limit 8192
Dec  7 14:09:25 np0005549633 augenrules[724]: lost 0
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog 4
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_wait_time 60000
Dec  7 14:09:25 np0005549633 augenrules[724]: backlog_wait_time_actual 0
Dec  7 14:09:25 np0005549633 systemd[1]: Started Security Auditing Service.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Rebuild Hardware Database.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Update is Completed...
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Update is Completed.
Dec  7 14:09:25 np0005549633 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Dec  7 14:09:25 np0005549633 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target System Initialization.
Dec  7 14:09:25 np0005549633 systemd[1]: Started dnf makecache --timer.
Dec  7 14:09:25 np0005549633 systemd[1]: Started Daily rotation of log files.
Dec  7 14:09:25 np0005549633 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target Timer Units.
Dec  7 14:09:25 np0005549633 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  7 14:09:25 np0005549633 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target Socket Units.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting D-Bus System Message Bus...
Dec  7 14:09:25 np0005549633 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  7 14:09:25 np0005549633 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 14:09:25 np0005549633 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Load Kernel Module configfs...
Dec  7 14:09:25 np0005549633 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Load Kernel Module configfs.
Dec  7 14:09:25 np0005549633 systemd[1]: Started D-Bus System Message Bus.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target Basic System.
Dec  7 14:09:25 np0005549633 dbus-broker-lau[770]: Ready
Dec  7 14:09:25 np0005549633 systemd[1]: Starting NTP client/server...
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  7 14:09:25 np0005549633 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  7 14:09:25 np0005549633 systemd[1]: Starting IPv4 firewall with iptables...
Dec  7 14:09:25 np0005549633 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  7 14:09:25 np0005549633 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  7 14:09:25 np0005549633 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  7 14:09:25 np0005549633 systemd[1]: Started irqbalance daemon.
Dec  7 14:09:25 np0005549633 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  7 14:09:25 np0005549633 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 14:09:25 np0005549633 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 14:09:25 np0005549633 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target sshd-keygen.target.
Dec  7 14:09:25 np0005549633 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  7 14:09:25 np0005549633 systemd[1]: Reached target User and Group Name Lookups.
Dec  7 14:09:25 np0005549633 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  7 14:09:25 np0005549633 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  7 14:09:25 np0005549633 chronyd[795]: Loaded 0 symmetric keys
Dec  7 14:09:25 np0005549633 chronyd[795]: Using right/UTC timezone to obtain leap second data
Dec  7 14:09:25 np0005549633 chronyd[795]: Loaded seccomp filter (level 2)
Dec  7 14:09:25 np0005549633 systemd[1]: Starting User Login Management...
Dec  7 14:09:25 np0005549633 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  7 14:09:25 np0005549633 systemd[1]: Started NTP client/server.
Dec  7 14:09:25 np0005549633 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  7 14:09:25 np0005549633 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  7 14:09:25 np0005549633 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  7 14:09:25 np0005549633 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  7 14:09:25 np0005549633 kernel: Console: switching to colour dummy device 80x25
Dec  7 14:09:25 np0005549633 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  7 14:09:25 np0005549633 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  7 14:09:25 np0005549633 systemd-logind[797]: New seat seat0.
Dec  7 14:09:25 np0005549633 systemd[1]: Started User Login Management.
Dec  7 14:09:25 np0005549633 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  7 14:09:25 np0005549633 kernel: [drm] features: -context_init
Dec  7 14:09:26 np0005549633 kernel: [drm] number of scanouts: 1
Dec  7 14:09:26 np0005549633 kernel: [drm] number of cap sets: 0
Dec  7 14:09:26 np0005549633 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  7 14:09:26 np0005549633 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  7 14:09:26 np0005549633 kernel: Console: switching to colour frame buffer device 128x48
Dec  7 14:09:26 np0005549633 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  7 14:09:26 np0005549633 kernel: kvm_amd: TSC scaling supported
Dec  7 14:09:26 np0005549633 kernel: kvm_amd: Nested Virtualization enabled
Dec  7 14:09:26 np0005549633 kernel: kvm_amd: Nested Paging enabled
Dec  7 14:09:26 np0005549633 kernel: kvm_amd: LBR virtualization supported
Dec  7 14:09:26 np0005549633 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Dec  7 14:09:26 np0005549633 systemd[1]: Finished IPv4 firewall with iptables.
Dec  7 14:09:26 np0005549633 cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sun, 07 Dec 2025 19:09:26 +0000. Up 6.88 seconds.
Dec  7 14:09:26 np0005549633 systemd[1]: run-cloud\x2dinit-tmp-tmpa6caaizo.mount: Deactivated successfully.
Dec  7 14:09:26 np0005549633 systemd[1]: Starting Hostname Service...
Dec  7 14:09:26 np0005549633 systemd[1]: Started Hostname Service.
Dec  7 14:09:26 np0005549633 systemd-hostnamed[855]: Hostname set to <np0005549633.novalocal> (static)
Dec  7 14:09:26 np0005549633 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  7 14:09:26 np0005549633 systemd[1]: Reached target Preparation for Network.
Dec  7 14:09:26 np0005549633 systemd[1]: Starting Network Manager...
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7648] NetworkManager (version 1.54.1-1.el9) is starting... (boot:6e15a106-c59c-4f7b-87e9-49ee1e7fa39f)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7654] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7763] manager[0x559fa3b0f080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7815] hostname: hostname: using hostnamed
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7816] hostname: static hostname changed from (none) to "np0005549633.novalocal"
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7822] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7957] manager[0x559fa3b0f080]: rfkill: Wi-Fi hardware radio set enabled
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.7958] manager[0x559fa3b0f080]: rfkill: WWAN hardware radio set enabled
Dec  7 14:09:26 np0005549633 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8061] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8062] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8063] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8064] manager: Networking is enabled by state file
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8069] settings: Loaded settings plugin: keyfile (internal)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8086] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8125] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8146] dhcp: init: Using DHCP client 'internal'
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8151] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8173] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8186] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8200] device (lo): Activation: starting connection 'lo' (488826b8-286b-4022-bb2f-8a62b46cf9ae)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8218] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8223] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8271] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8278] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8282] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8286] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8288] device (eth0): carrier: link connected
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8294] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  7 14:09:26 np0005549633 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8311] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8328] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8336] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8337] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8339] manager: NetworkManager state is now CONNECTING
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8342] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:09:26 np0005549633 systemd[1]: Started Network Manager.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8353] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8357] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:09:26 np0005549633 systemd[1]: Reached target Network.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8416] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8425] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  7 14:09:26 np0005549633 systemd[1]: Starting Network Manager Wait Online...
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8449] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:09:26 np0005549633 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  7 14:09:26 np0005549633 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 14:09:26 np0005549633 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8687] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8689] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8690] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8695] device (lo): Activation: successful, device activated.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8701] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:09:26 np0005549633 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8703] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 14:09:26 np0005549633 systemd[1]: Reached target NFS client services.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8705] device (eth0): Activation: successful, device activated.
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8710] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  7 14:09:26 np0005549633 NetworkManager[859]: <info>  [1765134566.8712] manager: startup complete
Dec  7 14:09:26 np0005549633 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  7 14:09:26 np0005549633 systemd[1]: Reached target Remote File Systems.
Dec  7 14:09:26 np0005549633 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  7 14:09:26 np0005549633 systemd[1]: Finished Network Manager Wait Online.
Dec  7 14:09:26 np0005549633 systemd[1]: Starting Cloud-init: Network Stage...
Dec  7 14:09:27 np0005549633 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Sun, 07 Dec 2025 19:09:27 +0000. Up 7.89 seconds.
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.53         | 255.255.255.0 | global | fa:16:3e:ee:5c:9f |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:feee:5c9f/64 |       .       |  link  | fa:16:3e:ee:5c:9f |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  7 14:09:27 np0005549633 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  7 14:09:28 np0005549633 cloud-init[922]: Generating public/private rsa key pair.
Dec  7 14:09:28 np0005549633 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  7 14:09:28 np0005549633 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  7 14:09:28 np0005549633 cloud-init[922]: The key fingerprint is:
Dec  7 14:09:28 np0005549633 cloud-init[922]: SHA256:eBgo7m7ohJQFy5aJQtgSh9jARVOg/ecMbAssATqH5ZQ root@np0005549633.novalocal
Dec  7 14:09:28 np0005549633 cloud-init[922]: The key's randomart image is:
Dec  7 14:09:28 np0005549633 cloud-init[922]: +---[RSA 3072]----+
Dec  7 14:09:28 np0005549633 cloud-init[922]: |BO**o.           |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |@BE ..           |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |*X+o. .          |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |o+=.o  +         |
Dec  7 14:09:28 np0005549633 cloud-init[922]: | +.o =o.S        |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |o.. o *.         |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |.o.  . o         |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |o..              |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |.o.              |
Dec  7 14:09:28 np0005549633 cloud-init[922]: +----[SHA256]-----+
Dec  7 14:09:28 np0005549633 cloud-init[922]: Generating public/private ecdsa key pair.
Dec  7 14:09:28 np0005549633 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  7 14:09:28 np0005549633 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  7 14:09:28 np0005549633 cloud-init[922]: The key fingerprint is:
Dec  7 14:09:28 np0005549633 cloud-init[922]: SHA256:vjZOvEA4DzORUnvELDjTBDAgXXDsWWAEicQKM+kclvg root@np0005549633.novalocal
Dec  7 14:09:28 np0005549633 cloud-init[922]: The key's randomart image is:
Dec  7 14:09:28 np0005549633 cloud-init[922]: +---[ECDSA 256]---+
Dec  7 14:09:28 np0005549633 cloud-init[922]: |X*=%O+.          |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |O**++++          |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |*+o+++.          |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |.oE.o+           |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |    * . S        |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |     B o         |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |      o +        |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |       ooo       |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |       o+.       |
Dec  7 14:09:28 np0005549633 cloud-init[922]: +----[SHA256]-----+
Dec  7 14:09:28 np0005549633 cloud-init[922]: Generating public/private ed25519 key pair.
Dec  7 14:09:28 np0005549633 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  7 14:09:28 np0005549633 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  7 14:09:28 np0005549633 cloud-init[922]: The key fingerprint is:
Dec  7 14:09:28 np0005549633 cloud-init[922]: SHA256:6NNxfYLil0EUccuxEi2310l64wxQnXouGbDugmTgW5c root@np0005549633.novalocal
Dec  7 14:09:28 np0005549633 cloud-init[922]: The key's randomart image is:
Dec  7 14:09:28 np0005549633 cloud-init[922]: +--[ED25519 256]--+
Dec  7 14:09:28 np0005549633 cloud-init[922]: |          =++.. .|
Dec  7 14:09:28 np0005549633 cloud-init[922]: |         ..*o+ + |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |          ooB.+..|
Dec  7 14:09:28 np0005549633 cloud-init[922]: |     . . . =.=.=.|
Dec  7 14:09:28 np0005549633 cloud-init[922]: |    . o S * o.@ .|
Dec  7 14:09:28 np0005549633 cloud-init[922]: |     o * E + = + |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |      B = +   .  |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |     . o o .     |
Dec  7 14:09:28 np0005549633 cloud-init[922]: |          .      |
Dec  7 14:09:28 np0005549633 cloud-init[922]: +----[SHA256]-----+
Dec  7 14:09:28 np0005549633 systemd[1]: Finished Cloud-init: Network Stage.
Dec  7 14:09:28 np0005549633 systemd[1]: Reached target Cloud-config availability.
Dec  7 14:09:28 np0005549633 systemd[1]: Reached target Network is Online.
Dec  7 14:09:28 np0005549633 systemd[1]: Starting Cloud-init: Config Stage...
Dec  7 14:09:28 np0005549633 systemd[1]: Starting Crash recovery kernel arming...
Dec  7 14:09:28 np0005549633 systemd[1]: Starting Notify NFS peers of a restart...
Dec  7 14:09:28 np0005549633 systemd[1]: Starting System Logging Service...
Dec  7 14:09:28 np0005549633 systemd[1]: Starting OpenSSH server daemon...
Dec  7 14:09:28 np0005549633 sm-notify[1004]: Version 2.5.4 starting
Dec  7 14:09:28 np0005549633 systemd[1]: Starting Permit User Sessions...
Dec  7 14:09:28 np0005549633 systemd[1]: Started Notify NFS peers of a restart.
Dec  7 14:09:28 np0005549633 systemd[1]: Started OpenSSH server daemon.
Dec  7 14:09:28 np0005549633 systemd[1]: Finished Permit User Sessions.
Dec  7 14:09:28 np0005549633 systemd[1]: Started Command Scheduler.
Dec  7 14:09:28 np0005549633 systemd[1]: Started Getty on tty1.
Dec  7 14:09:28 np0005549633 systemd[1]: Started Serial Getty on ttyS0.
Dec  7 14:09:28 np0005549633 systemd[1]: Reached target Login Prompts.
Dec  7 14:09:28 np0005549633 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Dec  7 14:09:28 np0005549633 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  7 14:09:28 np0005549633 systemd[1]: Started System Logging Service.
Dec  7 14:09:28 np0005549633 systemd[1]: Reached target Multi-User System.
Dec  7 14:09:28 np0005549633 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  7 14:09:28 np0005549633 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  7 14:09:28 np0005549633 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  7 14:09:28 np0005549633 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 14:09:28 np0005549633 kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Dec  7 14:09:28 np0005549633 kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  7 14:09:28 np0005549633 cloud-init[1094]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sun, 07 Dec 2025 19:09:28 +0000. Up 9.57 seconds.
Dec  7 14:09:28 np0005549633 systemd[1]: Finished Cloud-init: Config Stage.
Dec  7 14:09:29 np0005549633 systemd[1]: Starting Cloud-init: Final Stage...
Dec  7 14:09:29 np0005549633 cloud-init[1284]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sun, 07 Dec 2025 19:09:29 +0000. Up 10.02 seconds.
Dec  7 14:09:29 np0005549633 dracut[1286]: dracut-057-102.git20250818.el9
Dec  7 14:09:29 np0005549633 cloud-init[1303]: #############################################################
Dec  7 14:09:29 np0005549633 cloud-init[1304]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  7 14:09:29 np0005549633 cloud-init[1306]: 256 SHA256:vjZOvEA4DzORUnvELDjTBDAgXXDsWWAEicQKM+kclvg root@np0005549633.novalocal (ECDSA)
Dec  7 14:09:29 np0005549633 cloud-init[1308]: 256 SHA256:6NNxfYLil0EUccuxEi2310l64wxQnXouGbDugmTgW5c root@np0005549633.novalocal (ED25519)
Dec  7 14:09:29 np0005549633 cloud-init[1310]: 3072 SHA256:eBgo7m7ohJQFy5aJQtgSh9jARVOg/ecMbAssATqH5ZQ root@np0005549633.novalocal (RSA)
Dec  7 14:09:29 np0005549633 cloud-init[1311]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  7 14:09:29 np0005549633 cloud-init[1312]: #############################################################
Dec  7 14:09:29 np0005549633 cloud-init[1284]: Cloud-init v. 24.4-7.el9 finished at Sun, 07 Dec 2025 19:09:29 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.22 seconds
Dec  7 14:09:29 np0005549633 dracut[1288]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  7 14:09:29 np0005549633 systemd[1]: Finished Cloud-init: Final Stage.
Dec  7 14:09:29 np0005549633 systemd[1]: Reached target Cloud-init target.
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: memstrack is not available
Dec  7 14:09:30 np0005549633 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  7 14:09:30 np0005549633 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  7 14:09:31 np0005549633 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  7 14:09:31 np0005549633 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  7 14:09:31 np0005549633 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  7 14:09:31 np0005549633 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  7 14:09:31 np0005549633 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  7 14:09:31 np0005549633 dracut[1288]: memstrack is not available
Dec  7 14:09:31 np0005549633 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  7 14:09:31 np0005549633 dracut[1288]: *** Including module: systemd ***
Dec  7 14:09:31 np0005549633 dracut[1288]: *** Including module: fips ***
Dec  7 14:09:31 np0005549633 chronyd[795]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Dec  7 14:09:31 np0005549633 chronyd[795]: System clock TAI offset set to 37 seconds
Dec  7 14:09:32 np0005549633 dracut[1288]: *** Including module: systemd-initrd ***
Dec  7 14:09:32 np0005549633 dracut[1288]: *** Including module: i18n ***
Dec  7 14:09:32 np0005549633 dracut[1288]: *** Including module: drm ***
Dec  7 14:09:32 np0005549633 dracut[1288]: *** Including module: prefixdevname ***
Dec  7 14:09:32 np0005549633 dracut[1288]: *** Including module: kernel-modules ***
Dec  7 14:09:32 np0005549633 kernel: block vda: the capability attribute has been deprecated.
Dec  7 14:09:33 np0005549633 dracut[1288]: *** Including module: kernel-modules-extra ***
Dec  7 14:09:33 np0005549633 dracut[1288]: *** Including module: qemu ***
Dec  7 14:09:33 np0005549633 dracut[1288]: *** Including module: fstab-sys ***
Dec  7 14:09:33 np0005549633 dracut[1288]: *** Including module: rootfs-block ***
Dec  7 14:09:33 np0005549633 dracut[1288]: *** Including module: terminfo ***
Dec  7 14:09:33 np0005549633 dracut[1288]: *** Including module: udev-rules ***
Dec  7 14:09:34 np0005549633 dracut[1288]: Skipping udev rule: 91-permissions.rules
Dec  7 14:09:34 np0005549633 dracut[1288]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  7 14:09:34 np0005549633 dracut[1288]: *** Including module: virtiofs ***
Dec  7 14:09:34 np0005549633 dracut[1288]: *** Including module: dracut-systemd ***
Dec  7 14:09:34 np0005549633 dracut[1288]: *** Including module: usrmount ***
Dec  7 14:09:34 np0005549633 dracut[1288]: *** Including module: base ***
Dec  7 14:09:34 np0005549633 dracut[1288]: *** Including module: fs-lib ***
Dec  7 14:09:35 np0005549633 dracut[1288]: *** Including module: kdumpbase ***
Dec  7 14:09:35 np0005549633 dracut[1288]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  7 14:09:35 np0005549633 dracut[1288]:  microcode_ctl module: mangling fw_dir
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel" is ignored
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  7 14:09:35 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  7 14:09:36 np0005549633 dracut[1288]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  7 14:09:36 np0005549633 dracut[1288]: *** Including module: openssl ***
Dec  7 14:09:36 np0005549633 dracut[1288]: *** Including module: shutdown ***
Dec  7 14:09:36 np0005549633 irqbalance[791]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  7 14:09:36 np0005549633 irqbalance[791]: IRQ 25 affinity is now unmanaged
Dec  7 14:09:36 np0005549633 irqbalance[791]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  7 14:09:36 np0005549633 irqbalance[791]: IRQ 31 affinity is now unmanaged
Dec  7 14:09:36 np0005549633 irqbalance[791]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  7 14:09:36 np0005549633 irqbalance[791]: IRQ 28 affinity is now unmanaged
Dec  7 14:09:36 np0005549633 irqbalance[791]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  7 14:09:36 np0005549633 irqbalance[791]: IRQ 32 affinity is now unmanaged
Dec  7 14:09:36 np0005549633 irqbalance[791]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  7 14:09:36 np0005549633 irqbalance[791]: IRQ 30 affinity is now unmanaged
Dec  7 14:09:36 np0005549633 irqbalance[791]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  7 14:09:36 np0005549633 irqbalance[791]: IRQ 29 affinity is now unmanaged
Dec  7 14:09:36 np0005549633 dracut[1288]: *** Including module: squash ***
Dec  7 14:09:36 np0005549633 dracut[1288]: *** Including modules done ***
Dec  7 14:09:36 np0005549633 dracut[1288]: *** Installing kernel module dependencies ***
Dec  7 14:09:37 np0005549633 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 14:09:37 np0005549633 dracut[1288]: *** Installing kernel module dependencies done ***
Dec  7 14:09:37 np0005549633 dracut[1288]: *** Resolving executable dependencies ***
Dec  7 14:09:39 np0005549633 dracut[1288]: *** Resolving executable dependencies done ***
Dec  7 14:09:39 np0005549633 dracut[1288]: *** Generating early-microcode cpio image ***
Dec  7 14:09:39 np0005549633 dracut[1288]: *** Store current command line parameters ***
Dec  7 14:09:39 np0005549633 dracut[1288]: Stored kernel commandline:
Dec  7 14:09:39 np0005549633 dracut[1288]: No dracut internal kernel commandline stored in the initramfs
Dec  7 14:09:39 np0005549633 dracut[1288]: *** Install squash loader ***
Dec  7 14:09:40 np0005549633 dracut[1288]: *** Squashing the files inside the initramfs ***
Dec  7 14:09:41 np0005549633 dracut[1288]: *** Squashing the files inside the initramfs done ***
Dec  7 14:09:41 np0005549633 dracut[1288]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  7 14:09:41 np0005549633 dracut[1288]: *** Hardlinking files ***
Dec  7 14:09:41 np0005549633 dracut[1288]: *** Hardlinking files done ***
Dec  7 14:09:42 np0005549633 dracut[1288]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  7 14:09:42 np0005549633 kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Dec  7 14:09:42 np0005549633 kdumpctl[1020]: kdump: Starting kdump: [OK]
Dec  7 14:09:42 np0005549633 systemd[1]: Finished Crash recovery kernel arming.
Dec  7 14:09:42 np0005549633 systemd[1]: Startup finished in 1.631s (kernel) + 3.244s (initrd) + 18.651s (userspace) = 23.528s.
Dec  7 14:09:43 np0005549633 systemd[1]: Created slice User Slice of UID 1000.
Dec  7 14:09:43 np0005549633 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  7 14:09:43 np0005549633 systemd-logind[797]: New session 1 of user zuul.
Dec  7 14:09:43 np0005549633 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  7 14:09:43 np0005549633 systemd[1]: Starting User Manager for UID 1000...
Dec  7 14:09:43 np0005549633 systemd[4299]: Queued start job for default target Main User Target.
Dec  7 14:09:43 np0005549633 systemd[4299]: Created slice User Application Slice.
Dec  7 14:09:43 np0005549633 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 14:09:43 np0005549633 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 14:09:43 np0005549633 systemd[4299]: Reached target Paths.
Dec  7 14:09:43 np0005549633 systemd[4299]: Reached target Timers.
Dec  7 14:09:43 np0005549633 systemd[4299]: Starting D-Bus User Message Bus Socket...
Dec  7 14:09:43 np0005549633 systemd[4299]: Starting Create User's Volatile Files and Directories...
Dec  7 14:09:43 np0005549633 systemd[4299]: Finished Create User's Volatile Files and Directories.
Dec  7 14:09:43 np0005549633 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Dec  7 14:09:43 np0005549633 systemd[4299]: Reached target Sockets.
Dec  7 14:09:43 np0005549633 systemd[4299]: Reached target Basic System.
Dec  7 14:09:43 np0005549633 systemd[4299]: Reached target Main User Target.
Dec  7 14:09:43 np0005549633 systemd[4299]: Startup finished in 173ms.
Dec  7 14:09:43 np0005549633 systemd[1]: Started User Manager for UID 1000.
Dec  7 14:09:43 np0005549633 systemd[1]: Started Session 1 of User zuul.
Dec  7 14:09:44 np0005549633 python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:09:46 np0005549633 python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:09:55 np0005549633 python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:09:56 np0005549633 python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  7 14:09:56 np0005549633 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 14:09:58 np0005549633 python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ1K/thYK/ci7pWBAAPN2sge/qOpFDt8MiMhVICYxMYUVnuMp/O+iLvJLJ5kDjJ6H1FQaIcqJmt+t0IASLdFZXToLuleBQ8q2zpD/xZ3a5zoVaMedH2BI77dLRVO6yGlS/tKVxSDt6OIOD/CFt+JgXffn2YfMtwNsQWoSySpIaSKq5GB8sKfseaUtPFhmFzyfxD0TehlDOVpGPOLQPoH7SycNYV8vFY1iWNcUgDJ211BiQvr/H5jUGzUKQ2CEIZx/ScuYXUu2eJDUmDYv2Ld9h7KlKfevQG1eMo7/82puwuJ1E+gL2cZ35HSysqsi0GDk+dGz7eJvo3D2GKIYyxGRD/k+ds7FwTSr8LL40MRF9NAN8FernbdvEIr2IwGiH0jZHWg7Uvs/4NRZmADOIqLJRN0zXR7QshuuTRo985QSpCs76bLnW9kptZMoVpXnciHiAuRxXeSm4fMr3I1FNMdEmK32KtuNC2mKb0mmWQ4gvJ88TNvmF79ZC0O2AFbO9VZs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:09:59 np0005549633 python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:09:59 np0005549633 python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:00 np0005549633 python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765134599.2950358-251-243828735220174/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=30aed79e792c4489be0d6164f3e72268_id_rsa follow=False checksum=3d2cf055b16d81aa9b262bc101c857ec61e4a705 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:00 np0005549633 python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:01 np0005549633 python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765134600.3013065-306-7207847741915/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=30aed79e792c4489be0d6164f3e72268_id_rsa.pub follow=False checksum=dbf031ee36cc3c651609eeb689730eb1f50a29d8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:02 np0005549633 python3[4972]: ansible-ping Invoked with data=pong
Dec  7 14:10:03 np0005549633 python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:10:06 np0005549633 python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  7 14:10:07 np0005549633 python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:08 np0005549633 python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:08 np0005549633 python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:08 np0005549633 python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:08 np0005549633 python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:09 np0005549633 python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:10 np0005549633 python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:11 np0005549633 python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:12 np0005549633 python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765134611.1944997-31-182653573088879/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:12 np0005549633 python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:13 np0005549633 python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:13 np0005549633 python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:13 np0005549633 python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:14 np0005549633 python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:14 np0005549633 python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:14 np0005549633 python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:14 np0005549633 python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:15 np0005549633 python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:15 np0005549633 python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:15 np0005549633 python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:16 np0005549633 python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:16 np0005549633 python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:16 np0005549633 python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:16 np0005549633 python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:17 np0005549633 python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:17 np0005549633 python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:17 np0005549633 python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:18 np0005549633 python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:18 np0005549633 python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:18 np0005549633 python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:19 np0005549633 python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:19 np0005549633 python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:19 np0005549633 python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:19 np0005549633 python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:20 np0005549633 python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:10:22 np0005549633 python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  7 14:10:23 np0005549633 systemd[1]: Starting Time & Date Service...
Dec  7 14:10:23 np0005549633 systemd[1]: Started Time & Date Service.
Dec  7 14:10:23 np0005549633 systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Dec  7 14:10:23 np0005549633 python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:24 np0005549633 python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:24 np0005549633 python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765134623.7821717-251-79898679781048/source _original_basename=tmpr_3a4u1h follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:25 np0005549633 python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:25 np0005549633 python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765134625.0342739-301-17220227628068/source _original_basename=tmpqv6jbyog follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:26 np0005549633 python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:26 np0005549633 python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765134626.3056233-381-77619757534336/source _original_basename=tmpq2b720vw follow=False checksum=1cc2ea2b76967ada2d4710a35e138c3751da2100 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:27 np0005549633 python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:10:27 np0005549633 python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:10:28 np0005549633 python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:10:28 np0005549633 python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765134628.0614495-451-255449330476696/source _original_basename=tmpvdzdn7el follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:29 np0005549633 python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-8343-96b3-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:10:30 np0005549633 python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-8343-96b3-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  7 14:10:31 np0005549633 python3[6915]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:48 np0005549633 python3[6941]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:10:53 np0005549633 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  7 14:11:33 np0005549633 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  7 14:11:33 np0005549633 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9176] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  7 14:11:33 np0005549633 systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9331] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9358] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9362] device (eth1): carrier: link connected
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9363] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9369] policy: auto-activating connection 'Wired connection 1' (8a08f91e-7934-3c52-b9b7-ec55fb646221)
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9373] device (eth1): Activation: starting connection 'Wired connection 1' (8a08f91e-7934-3c52-b9b7-ec55fb646221)
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9374] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9376] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9379] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:11:33 np0005549633 NetworkManager[859]: <info>  [1765134693.9383] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:11:34 np0005549633 python3[6971]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-0fe5-ba3b-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:11:44 np0005549633 python3[7051]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:11:45 np0005549633 python3[7124]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765134704.6246095-104-182416730901045/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1347bb749d433818d933c59fe476a0e4afcb8872 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:11:46 np0005549633 python3[7174]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:11:46 np0005549633 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  7 14:11:46 np0005549633 systemd[1]: Stopped Network Manager Wait Online.
Dec  7 14:11:46 np0005549633 systemd[1]: Stopping Network Manager Wait Online...
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1068] caught SIGTERM, shutting down normally.
Dec  7 14:11:46 np0005549633 systemd[1]: Stopping Network Manager...
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1082] dhcp4 (eth0): canceled DHCP transaction
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1082] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1082] dhcp4 (eth0): state changed no lease
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1087] manager: NetworkManager state is now CONNECTING
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1156] dhcp4 (eth1): canceled DHCP transaction
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1156] dhcp4 (eth1): state changed no lease
Dec  7 14:11:46 np0005549633 NetworkManager[859]: <info>  [1765134706.1226] exiting (success)
Dec  7 14:11:46 np0005549633 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 14:11:46 np0005549633 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  7 14:11:46 np0005549633 systemd[1]: Stopped Network Manager.
Dec  7 14:11:46 np0005549633 systemd[1]: NetworkManager.service: Consumed 1.033s CPU time, 10.2M memory peak.
Dec  7 14:11:46 np0005549633 systemd[1]: Starting Network Manager...
Dec  7 14:11:46 np0005549633 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.1649] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6e15a106-c59c-4f7b-87e9-49ee1e7fa39f)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.1652] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.1703] manager[0x558ec3fdf070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  7 14:11:46 np0005549633 systemd[1]: Starting Hostname Service...
Dec  7 14:11:46 np0005549633 systemd[1]: Started Hostname Service.
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2630] hostname: hostname: using hostnamed
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2634] hostname: static hostname changed from (none) to "np0005549633.novalocal"
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2640] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2645] manager[0x558ec3fdf070]: rfkill: Wi-Fi hardware radio set enabled
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2645] manager[0x558ec3fdf070]: rfkill: WWAN hardware radio set enabled
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2677] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2677] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2678] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2678] manager: Networking is enabled by state file
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2681] settings: Loaded settings plugin: keyfile (internal)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2685] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2708] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2718] dhcp: init: Using DHCP client 'internal'
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2720] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2724] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2730] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2736] device (lo): Activation: starting connection 'lo' (488826b8-286b-4022-bb2f-8a62b46cf9ae)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2741] device (eth0): carrier: link connected
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2745] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2749] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2749] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2754] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2759] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2763] device (eth1): carrier: link connected
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2767] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2771] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (8a08f91e-7934-3c52-b9b7-ec55fb646221) (indicated)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2771] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2775] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2780] device (eth1): Activation: starting connection 'Wired connection 1' (8a08f91e-7934-3c52-b9b7-ec55fb646221)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2791] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  7 14:11:46 np0005549633 systemd[1]: Started Network Manager.
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2797] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2800] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2803] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2805] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2809] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2812] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2815] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2819] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2835] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2838] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2853] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2860] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2890] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2895] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2906] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2917] device (lo): Activation: successful, device activated.
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.2933] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  7 14:11:46 np0005549633 systemd[1]: Starting Network Manager Wait Online...
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.3030] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.3051] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.3054] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.3061] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.3066] device (eth0): Activation: successful, device activated.
Dec  7 14:11:46 np0005549633 NetworkManager[7178]: <info>  [1765134706.3073] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  7 14:11:46 np0005549633 python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-0fe5-ba3b-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:11:56 np0005549633 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 14:12:16 np0005549633 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.2995] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 14:12:31 np0005549633 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 14:12:31 np0005549633 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3443] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3447] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3459] device (eth1): Activation: successful, device activated.
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3468] manager: startup complete
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3472] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <warn>  [1765134751.3485] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3493] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 systemd[1]: Finished Network Manager Wait Online.
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3636] dhcp4 (eth1): canceled DHCP transaction
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3636] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3636] dhcp4 (eth1): state changed no lease
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3651] policy: auto-activating connection 'ci-private-network' (957ed490-bc91-523c-8bac-06d1fa555c9d)
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3655] device (eth1): Activation: starting connection 'ci-private-network' (957ed490-bc91-523c-8bac-06d1fa555c9d)
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3656] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3659] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3666] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3674] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3712] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3714] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:12:31 np0005549633 NetworkManager[7178]: <info>  [1765134751.3721] device (eth1): Activation: successful, device activated.
Dec  7 14:12:35 np0005549633 systemd[4299]: Starting Mark boot as successful...
Dec  7 14:12:35 np0005549633 systemd[4299]: Finished Mark boot as successful.
Dec  7 14:12:41 np0005549633 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 14:12:46 np0005549633 systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Dec  7 14:13:54 np0005549633 systemd-logind[797]: New session 3 of user zuul.
Dec  7 14:13:54 np0005549633 systemd[1]: Started Session 3 of User zuul.
Dec  7 14:13:54 np0005549633 python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:13:54 np0005549633 python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765134834.314125-373-236996369828767/source _original_basename=tmp83nvx9p8 follow=False checksum=c4ce659b90c98d7ec69f62e3505caf46cd0c1c52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:13:59 np0005549633 systemd[1]: session-3.scope: Deactivated successfully.
Dec  7 14:13:59 np0005549633 systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Dec  7 14:13:59 np0005549633 systemd-logind[797]: Removed session 3.
Dec  7 14:15:35 np0005549633 systemd[4299]: Created slice User Background Tasks Slice.
Dec  7 14:15:35 np0005549633 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Dec  7 14:15:35 np0005549633 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Dec  7 14:21:44 np0005549633 systemd-logind[797]: New session 4 of user zuul.
Dec  7 14:21:44 np0005549633 systemd[1]: Started Session 4 of User zuul.
Dec  7 14:21:44 np0005549633 python3[7504]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-3b28-c97a-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:21:44 np0005549633 python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:21:45 np0005549633 python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:21:45 np0005549633 python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:21:45 np0005549633 python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:21:46 np0005549633 python3[7637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:21:47 np0005549633 python3[7715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:21:47 np0005549633 python3[7788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765135306.821831-521-46784609824570/source _original_basename=tmpsoeg7_vp follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:21:48 np0005549633 python3[7838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  7 14:21:48 np0005549633 systemd[1]: Reloading.
Dec  7 14:21:48 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:21:50 np0005549633 python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  7 14:21:50 np0005549633 python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:21:50 np0005549633 python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:21:51 np0005549633 python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:21:51 np0005549633 python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:21:52 np0005549633 python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-3b28-c97a-000000001cf5-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:21:52 np0005549633 python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:21:55 np0005549633 systemd[1]: session-4.scope: Deactivated successfully.
Dec  7 14:21:55 np0005549633 systemd[1]: session-4.scope: Consumed 4.051s CPU time.
Dec  7 14:21:55 np0005549633 systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Dec  7 14:21:55 np0005549633 systemd-logind[797]: Removed session 4.
Dec  7 14:21:57 np0005549633 systemd-logind[797]: New session 5 of user zuul.
Dec  7 14:21:57 np0005549633 systemd[1]: Started Session 5 of User zuul.
Dec  7 14:21:57 np0005549633 python3[8095]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 14:22:14 np0005549633 kernel: SELinux:  Converting 385 SID table entries...
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:22:14 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:22:25 np0005549633 kernel: SELinux:  Converting 385 SID table entries...
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:22:25 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:22:35 np0005549633 kernel: SELinux:  Converting 385 SID table entries...
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:22:35 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:22:36 np0005549633 setsebool[8162]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  7 14:22:36 np0005549633 setsebool[8162]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  7 14:22:48 np0005549633 kernel: SELinux:  Converting 388 SID table entries...
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:22:48 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:23:06 np0005549633 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  7 14:23:06 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:23:06 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:23:06 np0005549633 systemd[1]: Reloading.
Dec  7 14:23:06 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:23:06 np0005549633 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 14:23:12 np0005549633 python3[13892]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-2de1-084b-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:23:13 np0005549633 kernel: evm: overlay not supported
Dec  7 14:23:13 np0005549633 systemd[4299]: Starting D-Bus User Message Bus...
Dec  7 14:23:13 np0005549633 dbus-broker-launch[14252]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  7 14:23:13 np0005549633 dbus-broker-launch[14252]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  7 14:23:13 np0005549633 systemd[4299]: Started D-Bus User Message Bus.
Dec  7 14:23:13 np0005549633 dbus-broker-lau[14252]: Ready
Dec  7 14:23:13 np0005549633 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  7 14:23:13 np0005549633 systemd[4299]: Created slice Slice /user.
Dec  7 14:23:13 np0005549633 systemd[4299]: podman-14184.scope: unit configures an IP firewall, but not running as root.
Dec  7 14:23:13 np0005549633 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Dec  7 14:23:13 np0005549633 systemd[4299]: Started podman-14184.scope.
Dec  7 14:23:14 np0005549633 systemd[4299]: Started podman-pause-523f9dd6.scope.
Dec  7 14:23:15 np0005549633 python3[14999]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.44:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.44:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:23:15 np0005549633 python3[14999]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  7 14:23:16 np0005549633 systemd[1]: session-5.scope: Deactivated successfully.
Dec  7 14:23:16 np0005549633 systemd[1]: session-5.scope: Consumed 1min 6.125s CPU time.
Dec  7 14:23:16 np0005549633 systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Dec  7 14:23:16 np0005549633 systemd-logind[797]: Removed session 5.
Dec  7 14:23:39 np0005549633 systemd-logind[797]: New session 6 of user zuul.
Dec  7 14:23:39 np0005549633 systemd[1]: Started Session 6 of User zuul.
Dec  7 14:23:39 np0005549633 python3[22826]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVflu4Q7NI292kggyeJx8eMn66Es6gj1md0StyJZvnIlKciXo7BTWjzFITCGUZW6+UmCA7ydnD2nKpNDzRvdgU= zuul@np0005549632.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:23:40 np0005549633 python3[22999]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVflu4Q7NI292kggyeJx8eMn66Es6gj1md0StyJZvnIlKciXo7BTWjzFITCGUZW6+UmCA7ydnD2nKpNDzRvdgU= zuul@np0005549632.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:23:41 np0005549633 python3[23374]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005549633.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  7 14:23:41 np0005549633 python3[23648]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVflu4Q7NI292kggyeJx8eMn66Es6gj1md0StyJZvnIlKciXo7BTWjzFITCGUZW6+UmCA7ydnD2nKpNDzRvdgU= zuul@np0005549632.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  7 14:23:42 np0005549633 python3[23951]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:23:42 np0005549633 python3[24263]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765135421.995536-167-149112544470926/source _original_basename=tmpa6jchw21 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:23:43 np0005549633 python3[24673]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  7 14:23:43 np0005549633 systemd[1]: Starting Hostname Service...
Dec  7 14:23:43 np0005549633 systemd[1]: Started Hostname Service.
Dec  7 14:23:43 np0005549633 systemd-hostnamed[24771]: Changed pretty hostname to 'compute-0'
Dec  7 14:23:43 np0005549633 systemd-hostnamed[24771]: Hostname set to <compute-0> (static)
Dec  7 14:23:43 np0005549633 NetworkManager[7178]: <info>  [1765135423.8152] hostname: static hostname changed from "np0005549633.novalocal" to "compute-0"
Dec  7 14:23:43 np0005549633 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 14:23:43 np0005549633 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 14:23:44 np0005549633 systemd[1]: session-6.scope: Deactivated successfully.
Dec  7 14:23:44 np0005549633 systemd[1]: session-6.scope: Consumed 2.191s CPU time.
Dec  7 14:23:44 np0005549633 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Dec  7 14:23:44 np0005549633 systemd-logind[797]: Removed session 6.
Dec  7 14:23:53 np0005549633 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 14:23:57 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:23:57 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:23:57 np0005549633 systemd[1]: man-db-cache-update.service: Consumed 1min 1.911s CPU time.
Dec  7 14:23:57 np0005549633 systemd[1]: run-rb8036c16fb1c4345a0f4b9c089aca5a6.service: Deactivated successfully.
Dec  7 14:24:13 np0005549633 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 14:24:35 np0005549633 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  7 14:24:35 np0005549633 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  7 14:24:35 np0005549633 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  7 14:24:35 np0005549633 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  7 14:27:20 np0005549633 systemd-logind[797]: New session 7 of user zuul.
Dec  7 14:27:20 np0005549633 systemd[1]: Started Session 7 of User zuul.
Dec  7 14:27:20 np0005549633 python3[30061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:27:22 np0005549633 python3[30177]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:23 np0005549633 python3[30250]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:23 np0005549633 python3[30276]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:24 np0005549633 python3[30349]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:24 np0005549633 python3[30375]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:24 np0005549633 python3[30448]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:25 np0005549633 python3[30474]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:25 np0005549633 python3[30547]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:25 np0005549633 python3[30573]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:26 np0005549633 python3[30646]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:26 np0005549633 python3[30672]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:26 np0005549633 python3[30745]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:27 np0005549633 python3[30771]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:27:27 np0005549633 python3[30844]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765135642.5528448-33975-251975944063731/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:27:39 np0005549633 python3[30902]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:32:38 np0005549633 systemd[1]: session-7.scope: Deactivated successfully.
Dec  7 14:32:38 np0005549633 systemd[1]: session-7.scope: Consumed 5.292s CPU time.
Dec  7 14:32:38 np0005549633 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Dec  7 14:32:38 np0005549633 systemd-logind[797]: Removed session 7.
Dec  7 14:39:19 np0005549633 systemd-logind[797]: New session 8 of user zuul.
Dec  7 14:39:19 np0005549633 systemd[1]: Started Session 8 of User zuul.
Dec  7 14:39:20 np0005549633 python3.9[31065]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:39:22 np0005549633 python3.9[31246]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:39:31 np0005549633 systemd[1]: session-8.scope: Deactivated successfully.
Dec  7 14:39:31 np0005549633 systemd[1]: session-8.scope: Consumed 7.616s CPU time.
Dec  7 14:39:31 np0005549633 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Dec  7 14:39:31 np0005549633 systemd-logind[797]: Removed session 8.
Dec  7 14:39:46 np0005549633 systemd-logind[797]: New session 9 of user zuul.
Dec  7 14:39:46 np0005549633 systemd[1]: Started Session 9 of User zuul.
Dec  7 14:39:47 np0005549633 python3.9[31456]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  7 14:39:49 np0005549633 python3.9[31630]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:39:50 np0005549633 python3.9[31782]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:39:51 np0005549633 python3.9[31935]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:39:52 np0005549633 python3.9[32087]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:39:53 np0005549633 python3.9[32239]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:39:54 np0005549633 python3.9[32362]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136392.9978023-177-178526146613397/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:39:55 np0005549633 python3.9[32514]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:39:56 np0005549633 python3.9[32670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:39:57 np0005549633 python3.9[32822]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:39:57 np0005549633 python3.9[32972]: ansible-ansible.builtin.service_facts Invoked
Dec  7 14:40:05 np0005549633 python3.9[33226]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:40:06 np0005549633 python3.9[33376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:40:07 np0005549633 python3.9[33530]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:40:08 np0005549633 python3.9[33688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:40:09 np0005549633 python3.9[33772]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:40:53 np0005549633 systemd[1]: Reloading.
Dec  7 14:40:53 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:40:53 np0005549633 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  7 14:40:53 np0005549633 systemd[1]: Reloading.
Dec  7 14:40:53 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:40:54 np0005549633 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  7 14:40:54 np0005549633 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  7 14:40:54 np0005549633 systemd[1]: Reloading.
Dec  7 14:40:54 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:40:54 np0005549633 systemd[1]: Starting dnf makecache...
Dec  7 14:40:54 np0005549633 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  7 14:40:54 np0005549633 dnf[34058]: Failed determining last makecache time.
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-barbican-42b4c41831408a8e323 156 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  7 14:40:54 np0005549633 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  7 14:40:54 np0005549633 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 186 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-cinder-1c00d6490d88e436f26ef 152 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-python-stevedore-c4acc5639fd2329372142 147 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-python-cloudkitty-tests-tempest-2c80f8 144 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-os-refresh-config-9bfc52b5049be2d8de61 152 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 153 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-python-designate-tests-tempest-347fdbc 157 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-glance-1fd12c29b339f30fe823e 159 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 159 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-manila-3c01b7181572c95dac462 156 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-python-whitebox-neutron-tests-tempest- 162 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-octavia-ba397f07a7331190208c 161 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-watcher-c014f81a8647287f6dcc 165 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-ansible-config_template-5ccaa22121a7ff 199 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 200 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-swift-dc98a8463506ac520c469a 202 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-python-tempestconf-8515371b7cceebd4282 201 kB/s | 3.0 kB     00:00
Dec  7 14:40:54 np0005549633 dnf[34058]: delorean-openstack-heat-ui-013accbfd179753bc3f0 208 kB/s | 3.0 kB     00:00
Dec  7 14:40:55 np0005549633 dnf[34058]: CentOS Stream 9 - BaseOS                         29 kB/s | 7.0 kB     00:00
Dec  7 14:40:55 np0005549633 dnf[34058]: CentOS Stream 9 - AppStream                      30 kB/s | 7.1 kB     00:00
Dec  7 14:40:55 np0005549633 dnf[34058]: CentOS Stream 9 - CRB                            81 kB/s | 6.9 kB     00:00
Dec  7 14:40:55 np0005549633 dnf[34058]: CentOS Stream 9 - Extras packages                35 kB/s | 8.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: dlrn-antelope-testing                           180 kB/s | 3.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: dlrn-antelope-build-deps                        187 kB/s | 3.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: centos9-rabbitmq                                130 kB/s | 3.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: centos9-storage                                 129 kB/s | 3.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: centos9-opstools                                105 kB/s | 3.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: NFV SIG OpenvSwitch                             106 kB/s | 3.0 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: repo-setup-centos-appstream                     185 kB/s | 4.4 kB     00:00
Dec  7 14:40:56 np0005549633 irqbalance[791]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  7 14:40:56 np0005549633 irqbalance[791]: IRQ 27 affinity is now unmanaged
Dec  7 14:40:56 np0005549633 dnf[34058]: repo-setup-centos-baseos                        150 kB/s | 3.9 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: repo-setup-centos-highavailability              154 kB/s | 3.9 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: repo-setup-centos-powertools                    204 kB/s | 4.3 kB     00:00
Dec  7 14:40:56 np0005549633 dnf[34058]: Extra Packages for Enterprise Linux 9 - x86_64  238 kB/s |  32 kB     00:00
Dec  7 14:40:57 np0005549633 dnf[34058]: Metadata cache created.
Dec  7 14:40:57 np0005549633 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  7 14:40:57 np0005549633 systemd[1]: Finished dnf makecache.
Dec  7 14:40:57 np0005549633 systemd[1]: dnf-makecache.service: Consumed 1.792s CPU time.
Dec  7 14:41:59 np0005549633 kernel: SELinux:  Converting 2718 SID table entries...
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:41:59 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:42:00 np0005549633 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  7 14:42:00 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:42:00 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:42:00 np0005549633 systemd[1]: Reloading.
Dec  7 14:42:00 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:42:00 np0005549633 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 14:42:01 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:42:01 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:42:01 np0005549633 systemd[1]: man-db-cache-update.service: Consumed 1.184s CPU time.
Dec  7 14:42:01 np0005549633 systemd[1]: run-r3288cd6dfac748ebbcc9fda0041e8996.service: Deactivated successfully.
Dec  7 14:42:01 np0005549633 python3.9[35342]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:42:03 np0005549633 python3.9[35623]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  7 14:42:04 np0005549633 python3.9[35775]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  7 14:42:06 np0005549633 irqbalance[791]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  7 14:42:06 np0005549633 irqbalance[791]: IRQ 26 affinity is now unmanaged
Dec  7 14:42:07 np0005549633 python3.9[35929]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:42:09 np0005549633 python3.9[36081]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  7 14:42:10 np0005549633 python3.9[36233]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:42:13 np0005549633 python3.9[36385]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:42:14 np0005549633 python3.9[36508]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136530.8003557-666-157427894340659/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9b2acdcc68e6819d7792167ef65a6685ced49ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:42:19 np0005549633 python3.9[36660]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:42:20 np0005549633 python3.9[36813]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:42:20 np0005549633 python3.9[36966]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:42:22 np0005549633 python3.9[37118]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  7 14:42:22 np0005549633 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 14:42:22 np0005549633 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 14:42:23 np0005549633 python3.9[37272]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 14:42:24 np0005549633 python3.9[37430]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  7 14:42:25 np0005549633 python3.9[37590]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  7 14:42:26 np0005549633 python3.9[37743]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 14:42:27 np0005549633 python3.9[37901]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  7 14:42:28 np0005549633 python3.9[38053]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:42:31 np0005549633 python3.9[38206]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:42:32 np0005549633 python3.9[38358]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:42:32 np0005549633 python3.9[38481]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765136551.6468089-1023-131226104856248/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:42:33 np0005549633 python3.9[38633]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:42:34 np0005549633 systemd[1]: Starting Load Kernel Modules...
Dec  7 14:42:34 np0005549633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  7 14:42:34 np0005549633 kernel: Bridge firewalling registered
Dec  7 14:42:34 np0005549633 systemd-modules-load[38637]: Inserted module 'br_netfilter'
Dec  7 14:42:34 np0005549633 systemd[1]: Finished Load Kernel Modules.
Dec  7 14:42:34 np0005549633 python3.9[38792]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:42:35 np0005549633 python3.9[38915]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765136554.374622-1092-65167666080064/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:42:37 np0005549633 python3.9[39067]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:42:40 np0005549633 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  7 14:42:40 np0005549633 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Dec  7 14:42:41 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:42:41 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:42:41 np0005549633 systemd[1]: Reloading.
Dec  7 14:42:41 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:42:41 np0005549633 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 14:42:43 np0005549633 python3.9[40921]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:42:44 np0005549633 python3.9[42305]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  7 14:42:44 np0005549633 python3.9[43059]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:42:44 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:42:44 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:42:44 np0005549633 systemd[1]: man-db-cache-update.service: Consumed 4.470s CPU time.
Dec  7 14:42:44 np0005549633 systemd[1]: run-r56b4a2ce53694cf0913c22358adca417.service: Deactivated successfully.
Dec  7 14:42:46 np0005549633 python3.9[43228]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:42:46 np0005549633 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  7 14:42:46 np0005549633 systemd[1]: Starting Authorization Manager...
Dec  7 14:42:46 np0005549633 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  7 14:42:46 np0005549633 polkitd[43445]: Started polkitd version 0.117
Dec  7 14:42:46 np0005549633 systemd[1]: Started Authorization Manager.
Dec  7 14:42:48 np0005549633 python3.9[43615]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:42:48 np0005549633 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  7 14:42:48 np0005549633 systemd[1]: tuned.service: Deactivated successfully.
Dec  7 14:42:48 np0005549633 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  7 14:42:48 np0005549633 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  7 14:42:48 np0005549633 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  7 14:42:49 np0005549633 python3.9[43777]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  7 14:42:53 np0005549633 python3.9[43929]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:42:53 np0005549633 systemd[1]: Reloading.
Dec  7 14:42:54 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:42:54 np0005549633 python3.9[44117]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:42:54 np0005549633 systemd[1]: Reloading.
Dec  7 14:42:54 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:42:57 np0005549633 python3.9[44305]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:42:58 np0005549633 python3.9[44458]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:42:58 np0005549633 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  7 14:42:59 np0005549633 python3.9[44611]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:43:01 np0005549633 python3.9[44773]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:43:02 np0005549633 python3.9[44926]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:43:02 np0005549633 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  7 14:43:02 np0005549633 systemd[1]: Stopped Apply Kernel Variables.
Dec  7 14:43:02 np0005549633 systemd[1]: Stopping Apply Kernel Variables...
Dec  7 14:43:02 np0005549633 systemd[1]: Starting Apply Kernel Variables...
Dec  7 14:43:02 np0005549633 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  7 14:43:02 np0005549633 systemd[1]: Finished Apply Kernel Variables.
Dec  7 14:43:02 np0005549633 systemd[1]: session-9.scope: Deactivated successfully.
Dec  7 14:43:02 np0005549633 systemd[1]: session-9.scope: Consumed 2min 18.704s CPU time.
Dec  7 14:43:02 np0005549633 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Dec  7 14:43:02 np0005549633 systemd-logind[797]: Removed session 9.
Dec  7 14:43:08 np0005549633 systemd-logind[797]: New session 10 of user zuul.
Dec  7 14:43:08 np0005549633 systemd[1]: Started Session 10 of User zuul.
Dec  7 14:43:09 np0005549633 python3.9[45109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:43:10 np0005549633 python3.9[45265]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  7 14:43:11 np0005549633 python3.9[45418]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  7 14:43:12 np0005549633 python3.9[45576]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  7 14:43:14 np0005549633 python3.9[45736]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:43:14 np0005549633 python3.9[45820]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  7 14:43:18 np0005549633 python3.9[45983]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:43:31 np0005549633 kernel: SELinux:  Converting 2730 SID table entries...
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:43:31 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:43:32 np0005549633 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  7 14:43:32 np0005549633 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  7 14:43:33 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:43:33 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:43:33 np0005549633 systemd[1]: Reloading.
Dec  7 14:43:33 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:43:33 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:43:33 np0005549633 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 14:43:34 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:43:34 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:43:34 np0005549633 systemd[1]: run-r9a0d419fbbe14a31bb6572f3df4bb096.service: Deactivated successfully.
Dec  7 14:43:35 np0005549633 python3.9[47082]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  7 14:43:35 np0005549633 systemd[1]: Reloading.
Dec  7 14:43:35 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:43:35 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:43:35 np0005549633 systemd[1]: Starting Open vSwitch Database Unit...
Dec  7 14:43:35 np0005549633 chown[47124]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  7 14:43:35 np0005549633 ovs-ctl[47129]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  7 14:43:35 np0005549633 ovs-ctl[47129]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  7 14:43:35 np0005549633 ovs-ctl[47129]: Starting ovsdb-server [  OK  ]
Dec  7 14:43:35 np0005549633 ovs-vsctl[47178]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  7 14:43:35 np0005549633 ovs-vsctl[47198]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bb3b4840-74fa-41df-8113-c995ec2a4611\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  7 14:43:35 np0005549633 ovs-ctl[47129]: Configuring Open vSwitch system IDs [  OK  ]
Dec  7 14:43:35 np0005549633 ovs-ctl[47129]: Enabling remote OVSDB managers [  OK  ]
Dec  7 14:43:35 np0005549633 ovs-vsctl[47204]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  7 14:43:35 np0005549633 systemd[1]: Started Open vSwitch Database Unit.
Dec  7 14:43:35 np0005549633 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  7 14:43:36 np0005549633 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  7 14:43:36 np0005549633 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  7 14:43:36 np0005549633 kernel: openvswitch: Open vSwitch switching datapath
Dec  7 14:43:36 np0005549633 ovs-ctl[47249]: Inserting openvswitch module [  OK  ]
Dec  7 14:43:36 np0005549633 ovs-ctl[47217]: Starting ovs-vswitchd [  OK  ]
Dec  7 14:43:36 np0005549633 ovs-vsctl[47266]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  7 14:43:36 np0005549633 ovs-ctl[47217]: Enabling remote OVSDB managers [  OK  ]
Dec  7 14:43:36 np0005549633 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  7 14:43:36 np0005549633 systemd[1]: Starting Open vSwitch...
Dec  7 14:43:36 np0005549633 systemd[1]: Finished Open vSwitch.
Dec  7 14:43:37 np0005549633 python3.9[47418]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:43:38 np0005549633 python3.9[47570]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  7 14:43:39 np0005549633 kernel: SELinux:  Converting 2744 SID table entries...
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability network_peer_controls=1
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability open_perms=1
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability extended_socket_class=1
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability always_check_network=0
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  7 14:43:39 np0005549633 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  7 14:43:41 np0005549633 python3.9[47725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:43:41 np0005549633 dbus-broker-launch[779]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  7 14:43:42 np0005549633 python3.9[47883]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:43:44 np0005549633 python3.9[48036]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:43:46 np0005549633 python3.9[48323]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  7 14:43:47 np0005549633 python3.9[48473]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:43:48 np0005549633 python3.9[48627]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:43:49 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:43:49 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:43:49 np0005549633 systemd[1]: Reloading.
Dec  7 14:43:50 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:43:50 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:43:50 np0005549633 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 14:43:50 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:43:50 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:43:50 np0005549633 systemd[1]: run-r88c6d9330af24ae58ecb5c34c3d0eb04.service: Deactivated successfully.
Dec  7 14:43:51 np0005549633 python3.9[48944]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:43:51 np0005549633 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  7 14:43:51 np0005549633 systemd[1]: Stopped Network Manager Wait Online.
Dec  7 14:43:51 np0005549633 systemd[1]: Stopping Network Manager Wait Online...
Dec  7 14:43:51 np0005549633 systemd[1]: Stopping Network Manager...
Dec  7 14:43:51 np0005549633 NetworkManager[7178]: <info>  [1765136631.7011] caught SIGTERM, shutting down normally.
Dec  7 14:43:51 np0005549633 NetworkManager[7178]: <info>  [1765136631.7034] dhcp4 (eth0): canceled DHCP transaction
Dec  7 14:43:51 np0005549633 NetworkManager[7178]: <info>  [1765136631.7035] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:43:51 np0005549633 NetworkManager[7178]: <info>  [1765136631.7035] dhcp4 (eth0): state changed no lease
Dec  7 14:43:51 np0005549633 NetworkManager[7178]: <info>  [1765136631.7040] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 14:43:51 np0005549633 NetworkManager[7178]: <info>  [1765136631.7144] exiting (success)
Dec  7 14:43:51 np0005549633 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 14:43:51 np0005549633 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 14:43:51 np0005549633 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  7 14:43:51 np0005549633 systemd[1]: Stopped Network Manager.
Dec  7 14:43:51 np0005549633 systemd[1]: NetworkManager.service: Consumed 11.277s CPU time, 4.1M memory peak, read 0B from disk, written 26.0K to disk.
Dec  7 14:43:51 np0005549633 systemd[1]: Starting Network Manager...
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.8008] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:6e15a106-c59c-4f7b-87e9-49ee1e7fa39f)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.8012] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.8084] manager[0x56074f3c6090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  7 14:43:51 np0005549633 systemd[1]: Starting Hostname Service...
Dec  7 14:43:51 np0005549633 systemd[1]: Started Hostname Service.
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9198] hostname: hostname: using hostnamed
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9198] hostname: static hostname changed from (none) to "compute-0"
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9209] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9219] manager[0x56074f3c6090]: rfkill: Wi-Fi hardware radio set enabled
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9219] manager[0x56074f3c6090]: rfkill: WWAN hardware radio set enabled
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9259] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9275] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9277] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9278] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9279] manager: Networking is enabled by state file
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9284] settings: Loaded settings plugin: keyfile (internal)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9291] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9347] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9361] dhcp: init: Using DHCP client 'internal'
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9364] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9375] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9384] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9396] device (lo): Activation: starting connection 'lo' (488826b8-286b-4022-bb2f-8a62b46cf9ae)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9406] device (eth0): carrier: link connected
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9411] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9420] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9420] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9430] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9442] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9451] device (eth1): carrier: link connected
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9457] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9466] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (957ed490-bc91-523c-8bac-06d1fa555c9d) (indicated)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9466] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9474] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9486] device (eth1): Activation: starting connection 'ci-private-network' (957ed490-bc91-523c-8bac-06d1fa555c9d)
Dec  7 14:43:51 np0005549633 systemd[1]: Started Network Manager.
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9496] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9513] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9517] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9521] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9523] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9527] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9539] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9542] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9546] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9552] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9554] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9560] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9572] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9590] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9596] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9603] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  7 14:43:51 np0005549633 systemd[1]: Starting Network Manager Wait Online...
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9683] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9687] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9693] device (lo): Activation: successful, device activated.
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9700] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9708] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9711] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9714] device (eth1): Activation: successful, device activated.
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9737] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9738] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9741] manager: NetworkManager state is now CONNECTED_SITE
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9744] device (eth0): Activation: successful, device activated.
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9750] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  7 14:43:51 np0005549633 NetworkManager[48956]: <info>  [1765136631.9752] manager: startup complete
Dec  7 14:43:51 np0005549633 systemd[1]: Finished Network Manager Wait Online.
Dec  7 14:43:52 np0005549633 python3.9[49170]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:43:57 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:43:57 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:43:57 np0005549633 systemd[1]: Reloading.
Dec  7 14:43:58 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:43:58 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:43:58 np0005549633 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  7 14:43:59 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:43:59 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:43:59 np0005549633 systemd[1]: run-r0716c60d0ab843bd83e9795bc70cd54a.service: Deactivated successfully.
Dec  7 14:44:00 np0005549633 python3.9[49629]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:44:01 np0005549633 python3.9[49781]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:02 np0005549633 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 14:44:02 np0005549633 python3.9[49935]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:03 np0005549633 python3.9[50087]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:04 np0005549633 python3.9[50239]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:04 np0005549633 python3.9[50391]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:05 np0005549633 python3.9[50543]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:44:06 np0005549633 python3.9[50666]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136645.2204866-647-38262193834701/.source _original_basename=.qi5q7xzc follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:07 np0005549633 python3.9[50818]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:08 np0005549633 python3.9[50970]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  7 14:44:09 np0005549633 python3.9[51122]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:11 np0005549633 python3.9[51549]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  7 14:44:13 np0005549633 ansible-async_wrapper.py[51724]: Invoked with j141784281212 300 /home/zuul/.ansible/tmp/ansible-tmp-1765136652.3262985-845-51017732313535/AnsiballZ_edpm_os_net_config.py _
Dec  7 14:44:13 np0005549633 ansible-async_wrapper.py[51727]: Starting module and watcher
Dec  7 14:44:13 np0005549633 ansible-async_wrapper.py[51727]: Start watching 51728 (300)
Dec  7 14:44:13 np0005549633 ansible-async_wrapper.py[51728]: Start module (51728)
Dec  7 14:44:13 np0005549633 ansible-async_wrapper.py[51724]: Return async_wrapper task started.
Dec  7 14:44:13 np0005549633 python3.9[51729]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  7 14:44:14 np0005549633 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  7 14:44:14 np0005549633 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  7 14:44:14 np0005549633 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  7 14:44:14 np0005549633 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  7 14:44:14 np0005549633 kernel: cfg80211: failed to load regulatory.db
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5087] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5103] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5573] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5574] audit: op="connection-add" uuid="afe6df70-8559-4d21-8f24-d6419a1e800e" name="br-ex-br" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5587] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5588] audit: op="connection-add" uuid="a943daed-ccbf-4c06-957d-5c8b69c70167" name="br-ex-port" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5598] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5599] audit: op="connection-add" uuid="50d99923-6951-4bdc-ac55-d20ff9596e20" name="eth1-port" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5609] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5610] audit: op="connection-add" uuid="2d7e03a8-4525-4cc4-aa17-cb422cb831bd" name="vlan20-port" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5619] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5620] audit: op="connection-add" uuid="497785d7-c8bf-48be-890f-6628c959f8b2" name="vlan21-port" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5629] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5630] audit: op="connection-add" uuid="1bdf18fd-2675-4b7e-896b-3067b56fc844" name="vlan22-port" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5640] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5641] audit: op="connection-add" uuid="7f109878-df0c-48c0-800e-8d19a926c9f7" name="vlan23-port" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5659] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5673] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5674] audit: op="connection-add" uuid="7f221024-2b35-4d31-825a-c45b32f0ae11" name="br-ex-if" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5720] audit: op="connection-update" uuid="957ed490-bc91-523c-8bac-06d1fa555c9d" name="ci-private-network" args="ovs-interface.type,connection.slave-type,connection.timestamp,connection.port-type,connection.controller,connection.master,ipv4.routes,ipv4.never-default,ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.addresses,ipv6.method,ovs-external-ids.data" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5733] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5734] audit: op="connection-add" uuid="8ea3d8b3-7c9a-4ccd-9321-2333ba679709" name="vlan20-if" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5747] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5748] audit: op="connection-add" uuid="34f712a1-ea7d-49e7-bdb2-a03d2540dde2" name="vlan21-if" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5761] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5763] audit: op="connection-add" uuid="3bc012eb-0d96-4588-828c-9657833d1c1c" name="vlan22-if" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5775] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5776] audit: op="connection-add" uuid="c5a3bf0f-2bec-4e40-8e6b-1e83739d41e2" name="vlan23-if" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5786] audit: op="connection-delete" uuid="8a08f91e-7934-3c52-b9b7-ec55fb646221" name="Wired connection 1" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5798] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5810] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5813] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (afe6df70-8559-4d21-8f24-d6419a1e800e)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5814] audit: op="connection-activate" uuid="afe6df70-8559-4d21-8f24-d6419a1e800e" name="br-ex-br" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5815] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5822] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5825] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a943daed-ccbf-4c06-957d-5c8b69c70167)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5827] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5831] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5834] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (50d99923-6951-4bdc-ac55-d20ff9596e20)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5835] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5840] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5843] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (2d7e03a8-4525-4cc4-aa17-cb422cb831bd)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5844] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5849] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5852] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (497785d7-c8bf-48be-890f-6628c959f8b2)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5853] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5858] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5861] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (1bdf18fd-2675-4b7e-896b-3067b56fc844)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5862] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5867] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5869] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (7f109878-df0c-48c0-800e-8d19a926c9f7)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5870] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5871] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5873] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5877] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5880] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5883] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (7f221024-2b35-4d31-825a-c45b32f0ae11)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5883] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5886] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5887] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5888] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5888] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5896] device (eth1): disconnecting for new activation request.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5896] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5898] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5912] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5913] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5915] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5919] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5921] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (8ea3d8b3-7c9a-4ccd-9321-2333ba679709)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5922] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5924] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5926] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5927] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5929] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5934] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5938] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (34f712a1-ea7d-49e7-bdb2-a03d2540dde2)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5939] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5942] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5943] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5945] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5947] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5951] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5955] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (3bc012eb-0d96-4588-828c-9657833d1c1c)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5956] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5959] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5960] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5961] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5963] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5967] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5972] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c5a3bf0f-2bec-4e40-8e6b-1e83739d41e2)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5973] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5975] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5977] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5977] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5979] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5988] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5990] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5992] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5993] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.5998] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6001] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6003] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6006] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6007] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6012] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6015] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6018] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6019] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6024] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: ovs-system: entered promiscuous mode
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6027] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6030] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6031] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6036] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6040] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6043] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: Timeout policy base is empty
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6045] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6049] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6053] dhcp4 (eth0): canceled DHCP transaction
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6053] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6053] dhcp4 (eth0): state changed no lease
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6055] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  7 14:44:15 np0005549633 systemd-udevd[51734]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6064] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6067] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51730 uid=0 result="fail" reason="Device is not activated"
Dec  7 14:44:15 np0005549633 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6112] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6123] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6129] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6137] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6145] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6209] device (eth1): disconnecting for new activation request.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6212] audit: op="connection-activate" uuid="957ed490-bc91-523c-8bac-06d1fa555c9d" name="ci-private-network" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6224] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: br-ex: entered promiscuous mode
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6397] device (eth1): Activation: starting connection 'ci-private-network' (957ed490-bc91-523c-8bac-06d1fa555c9d)
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6403] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: vlan22: entered promiscuous mode
Dec  7 14:44:15 np0005549633 systemd-udevd[51735]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6435] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6440] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6445] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6451] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6461] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6463] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6464] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6466] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6468] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6470] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6472] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: vlan20: entered promiscuous mode
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6480] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6488] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6492] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6495] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6498] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6501] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6503] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6506] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6509] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6512] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6515] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6518] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6521] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6527] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6533] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6537] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6553] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: vlan21: entered promiscuous mode
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6563] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 systemd-udevd[51733]: Network interface NamePolicy= disabled on kernel command line.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6571] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6576] device (eth1): Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6581] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6583] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6584] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6589] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6592] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6609] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6613] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 kernel: vlan23: entered promiscuous mode
Dec  7 14:44:15 np0005549633 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6661] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6662] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6666] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6701] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6704] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6705] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6709] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6722] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6732] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6740] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6773] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6774] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6776] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6780] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6784] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  7 14:44:15 np0005549633 NetworkManager[48956]: <info>  [1765136655.6788] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  7 14:44:16 np0005549633 NetworkManager[48956]: <info>  [1765136656.8079] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec  7 14:44:16 np0005549633 NetworkManager[48956]: <info>  [1765136656.9953] checkpoint[0x56074f39b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  7 14:44:16 np0005549633 NetworkManager[48956]: <info>  [1765136656.9956] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 python3.9[52087]: ansible-ansible.legacy.async_status Invoked with jid=j141784281212.51724 mode=status _async_dir=/root/.ansible_async
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.2636] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.2653] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.5224] audit: op="networking-control" arg="global-dns-configuration" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.5251] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.5280] audit: op="networking-control" arg="global-dns-configuration" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.5739] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.7207] checkpoint[0x56074f39ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  7 14:44:17 np0005549633 NetworkManager[48956]: <info>  [1765136657.7210] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51730 uid=0 result="success"
Dec  7 14:44:17 np0005549633 ansible-async_wrapper.py[51728]: Module complete (51728)
Dec  7 14:44:18 np0005549633 ansible-async_wrapper.py[51727]: Done in kid B.
Dec  7 14:44:20 np0005549633 python3.9[52193]: ansible-ansible.legacy.async_status Invoked with jid=j141784281212.51724 mode=status _async_dir=/root/.ansible_async
Dec  7 14:44:21 np0005549633 python3.9[52293]: ansible-ansible.legacy.async_status Invoked with jid=j141784281212.51724 mode=cleanup _async_dir=/root/.ansible_async
Dec  7 14:44:21 np0005549633 python3.9[52445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:44:21 np0005549633 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  7 14:44:22 np0005549633 python3.9[52570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136661.47842-926-57403435659064/.source.returncode _original_basename=.97ptohij follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:23 np0005549633 python3.9[52722]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:44:23 np0005549633 python3.9[52846]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136662.9379573-974-62357771171092/.source.cfg _original_basename=.lmultsar follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:24 np0005549633 python3.9[52998]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:44:24 np0005549633 systemd[1]: Reloading Network Manager...
Dec  7 14:44:24 np0005549633 NetworkManager[48956]: <info>  [1765136664.9003] audit: op="reload" arg="0" pid=53002 uid=0 result="success"
Dec  7 14:44:24 np0005549633 NetworkManager[48956]: <info>  [1765136664.9009] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  7 14:44:24 np0005549633 systemd[1]: Reloaded Network Manager.
Dec  7 14:44:25 np0005549633 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Dec  7 14:44:25 np0005549633 systemd[1]: session-10.scope: Deactivated successfully.
Dec  7 14:44:25 np0005549633 systemd[1]: session-10.scope: Consumed 53.092s CPU time.
Dec  7 14:44:25 np0005549633 systemd-logind[797]: Removed session 10.
Dec  7 14:44:30 np0005549633 systemd-logind[797]: New session 11 of user zuul.
Dec  7 14:44:30 np0005549633 systemd[1]: Started Session 11 of User zuul.
Dec  7 14:44:31 np0005549633 python3.9[53186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:44:32 np0005549633 python3.9[53340]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:44:34 np0005549633 python3.9[53534]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:44:34 np0005549633 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  7 14:44:35 np0005549633 systemd[1]: session-11.scope: Deactivated successfully.
Dec  7 14:44:35 np0005549633 systemd[1]: session-11.scope: Consumed 2.491s CPU time.
Dec  7 14:44:35 np0005549633 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Dec  7 14:44:35 np0005549633 systemd-logind[797]: Removed session 11.
Dec  7 14:44:41 np0005549633 systemd-logind[797]: New session 12 of user zuul.
Dec  7 14:44:41 np0005549633 systemd[1]: Started Session 12 of User zuul.
Dec  7 14:44:42 np0005549633 python3.9[53716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:44:43 np0005549633 python3.9[53871]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:44:44 np0005549633 python3.9[54027]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:44:45 np0005549633 python3.9[54111]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:44:47 np0005549633 python3.9[54265]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:44:49 np0005549633 python3.9[54460]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:50 np0005549633 python3.9[54612]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:44:50 np0005549633 podman[54613]: 2025-12-07 19:44:50.577074566 +0000 UTC m=+0.046346510 system refresh
Dec  7 14:44:51 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:44:51 np0005549633 python3.9[54775]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:44:52 np0005549633 python3.9[54898]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136690.9415824-197-131243095867982/.source.json follow=False _original_basename=podman_network_config.j2 checksum=a1e76fa2f8da0526759e0f2571eff8e4217117a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:44:53 np0005549633 python3.9[55050]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:44:53 np0005549633 python3.9[55173]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765136692.5496726-242-253497266448223/.source.conf follow=False _original_basename=registries.conf.j2 checksum=ea7e71ddf075bf55e555c64399d15b2ffe005fe9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:44:54 np0005549633 python3.9[55325]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:44:55 np0005549633 python3.9[55477]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:44:55 np0005549633 python3.9[55629]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:44:56 np0005549633 python3.9[55781]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:44:58 np0005549633 python3.9[55933]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:45:00 np0005549633 python3.9[56086]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:45:01 np0005549633 python3.9[56240]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:45:02 np0005549633 python3.9[56392]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:45:03 np0005549633 python3.9[56544]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:45:04 np0005549633 python3.9[56697]: ansible-service_facts Invoked
Dec  7 14:45:04 np0005549633 network[56714]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 14:45:04 np0005549633 network[56715]: 'network-scripts' will be removed from distribution in near future.
Dec  7 14:45:04 np0005549633 network[56716]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 14:45:10 np0005549633 python3.9[57168]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  7 14:45:13 np0005549633 python3.9[57321]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  7 14:45:14 np0005549633 python3.9[57473]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:15 np0005549633 python3.9[57598]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136714.2156143-674-163699357938864/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:16 np0005549633 python3.9[57752]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:17 np0005549633 python3.9[57877]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136715.8569658-719-86524315106478/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:18 np0005549633 python3.9[58031]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:20 np0005549633 python3.9[58185]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:45:21 np0005549633 python3.9[58269]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:45:23 np0005549633 python3.9[58423]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:45:24 np0005549633 python3.9[58507]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:45:24 np0005549633 chronyd[795]: chronyd exiting
Dec  7 14:45:24 np0005549633 systemd[1]: Stopping NTP client/server...
Dec  7 14:45:24 np0005549633 systemd[1]: chronyd.service: Deactivated successfully.
Dec  7 14:45:24 np0005549633 systemd[1]: Stopped NTP client/server.
Dec  7 14:45:24 np0005549633 systemd[1]: Starting NTP client/server...
Dec  7 14:45:24 np0005549633 chronyd[58516]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  7 14:45:24 np0005549633 chronyd[58516]: Frequency -26.011 +/- 0.152 ppm read from /var/lib/chrony/drift
Dec  7 14:45:24 np0005549633 chronyd[58516]: Loaded seccomp filter (level 2)
Dec  7 14:45:24 np0005549633 systemd[1]: Started NTP client/server.
Dec  7 14:45:25 np0005549633 systemd[1]: session-12.scope: Deactivated successfully.
Dec  7 14:45:25 np0005549633 systemd[1]: session-12.scope: Consumed 25.413s CPU time.
Dec  7 14:45:25 np0005549633 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Dec  7 14:45:25 np0005549633 systemd-logind[797]: Removed session 12.
Dec  7 14:45:30 np0005549633 systemd-logind[797]: New session 13 of user zuul.
Dec  7 14:45:30 np0005549633 systemd[1]: Started Session 13 of User zuul.
Dec  7 14:45:31 np0005549633 python3.9[58698]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:32 np0005549633 python3.9[58850]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:33 np0005549633 python3.9[58973]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136731.7074592-62-147264737499884/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:33 np0005549633 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Dec  7 14:45:33 np0005549633 systemd[1]: session-13.scope: Deactivated successfully.
Dec  7 14:45:33 np0005549633 systemd[1]: session-13.scope: Consumed 1.735s CPU time.
Dec  7 14:45:33 np0005549633 systemd-logind[797]: Removed session 13.
Dec  7 14:45:38 np0005549633 systemd-logind[797]: New session 14 of user zuul.
Dec  7 14:45:38 np0005549633 systemd[1]: Started Session 14 of User zuul.
Dec  7 14:45:40 np0005549633 python3.9[59151]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:45:41 np0005549633 python3.9[59307]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:42 np0005549633 python3.9[59482]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:42 np0005549633 python3.9[59605]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765136741.4472332-83-160674716138559/.source.json _original_basename=.rj_djv5w follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:43 np0005549633 python3.9[59757]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:44 np0005549633 python3.9[59880]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136743.4234707-152-202814929902335/.source _original_basename=.2qpgd_yk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:45 np0005549633 python3.9[60032]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:45:46 np0005549633 python3.9[60184]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:46 np0005549633 python3.9[60307]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765136745.9783196-224-121690849953154/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:45:47 np0005549633 python3.9[60459]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:48 np0005549633 python3.9[60582]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765136747.1389766-224-107932244043273/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  7 14:45:49 np0005549633 python3.9[60734]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:50 np0005549633 python3.9[60886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:51 np0005549633 python3.9[61009]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136749.821441-335-34653362172610/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:52 np0005549633 python3.9[61161]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:52 np0005549633 python3.9[61284]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136751.549693-380-49066506866056/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:53 np0005549633 python3.9[61436]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:45:53 np0005549633 systemd[1]: Reloading.
Dec  7 14:45:54 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:45:54 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:45:54 np0005549633 systemd[1]: Reloading.
Dec  7 14:45:54 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:45:54 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:45:54 np0005549633 systemd[1]: Starting EDPM Container Shutdown...
Dec  7 14:45:54 np0005549633 systemd[1]: Finished EDPM Container Shutdown.
Dec  7 14:45:55 np0005549633 python3.9[61664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:55 np0005549633 python3.9[61787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136754.8246505-449-74328413099758/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:56 np0005549633 python3.9[61939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:45:57 np0005549633 python3.9[62062]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136756.323224-494-146049276678660/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:45:58 np0005549633 python3.9[62214]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:45:58 np0005549633 systemd[1]: Reloading.
Dec  7 14:45:58 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:45:58 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:45:58 np0005549633 systemd[1]: Reloading.
Dec  7 14:45:58 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:45:58 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:45:58 np0005549633 systemd[1]: Starting Create netns directory...
Dec  7 14:45:58 np0005549633 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  7 14:45:58 np0005549633 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  7 14:45:58 np0005549633 systemd[1]: Finished Create netns directory.
Dec  7 14:46:00 np0005549633 python3.9[62440]: ansible-ansible.builtin.service_facts Invoked
Dec  7 14:46:00 np0005549633 network[62457]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  7 14:46:00 np0005549633 network[62458]: 'network-scripts' will be removed from distribution in near future.
Dec  7 14:46:00 np0005549633 network[62459]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  7 14:46:07 np0005549633 python3.9[62721]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:46:07 np0005549633 systemd[1]: Reloading.
Dec  7 14:46:07 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:46:07 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:46:07 np0005549633 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  7 14:46:08 np0005549633 iptables.init[62760]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  7 14:46:08 np0005549633 iptables.init[62760]: iptables: Flushing firewall rules: [  OK  ]
Dec  7 14:46:08 np0005549633 systemd[1]: iptables.service: Deactivated successfully.
Dec  7 14:46:08 np0005549633 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  7 14:46:08 np0005549633 python3.9[62956]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:46:10 np0005549633 python3.9[63110]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:46:10 np0005549633 systemd[1]: Reloading.
Dec  7 14:46:10 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:46:10 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:46:10 np0005549633 systemd[1]: Starting Netfilter Tables...
Dec  7 14:46:10 np0005549633 systemd[1]: Finished Netfilter Tables.
Dec  7 14:46:12 np0005549633 python3.9[63302]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:46:13 np0005549633 python3.9[63455]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:14 np0005549633 python3.9[63580]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136773.1661532-701-211269474853501/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:15 np0005549633 python3.9[63733]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:46:15 np0005549633 systemd[1]: Reloading OpenSSH server daemon...
Dec  7 14:46:15 np0005549633 systemd[1]: Reloaded OpenSSH server daemon.
Dec  7 14:46:17 np0005549633 python3.9[63889]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:17 np0005549633 python3.9[64041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:18 np0005549633 python3.9[64164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136777.483157-794-122126127852291/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:19 np0005549633 python3.9[64316]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  7 14:46:19 np0005549633 systemd[1]: Starting Time & Date Service...
Dec  7 14:46:19 np0005549633 systemd[1]: Started Time & Date Service.
Dec  7 14:46:20 np0005549633 python3.9[64472]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:21 np0005549633 python3.9[64624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:22 np0005549633 python3.9[64747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136781.1663578-899-166195284235325/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:23 np0005549633 python3.9[64899]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:23 np0005549633 python3.9[65022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765136782.6422749-944-189372811396803/.source.yaml _original_basename=.xyacl6tx follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:24 np0005549633 python3.9[65174]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:25 np0005549633 python3.9[65297]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136784.109937-989-4235354031936/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:25 np0005549633 python3.9[65449]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:46:26 np0005549633 python3.9[65602]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:46:27 np0005549633 python3[65755]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  7 14:46:28 np0005549633 python3.9[65907]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:29 np0005549633 python3.9[66030]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136788.2012389-1106-273421182968272/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:30 np0005549633 python3.9[66182]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:30 np0005549633 python3.9[66305]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136789.7381105-1151-44220416833027/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:31 np0005549633 python3.9[66457]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:32 np0005549633 python3.9[66580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136791.3515077-1196-1964205309596/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:33 np0005549633 python3.9[66732]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:34 np0005549633 python3.9[66855]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136792.7973747-1241-199966546170601/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:35 np0005549633 python3.9[67007]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  7 14:46:35 np0005549633 python3.9[67130]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765136794.5942676-1286-182056043492770/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:36 np0005549633 python3.9[67282]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:37 np0005549633 python3.9[67434]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:46:38 np0005549633 python3.9[67593]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
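[editor's note] The blockinfile task above writes a managed include block into /etc/sysconfig/nftables.conf and validates it with `nft -c -f %s` before installing. The `#012` sequences are newline escapes added by the logging pipeline; decoded, the managed block reads:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```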
Dec  7 14:46:39 np0005549633 python3.9[67746]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:40 np0005549633 python3.9[67898]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:41 np0005549633 python3.9[68050]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  7 14:46:42 np0005549633 python3.9[68203]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
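[editor's note] The two ansible.posix.mount tasks above mount hugetlbfs with per-mount page sizes and, with state=mounted and boot=True, also persist them to fstab. Sketching from the logged parameters (src=none, fstype=hugetlbfs, opts=pagesize=…, dump=0, passno=0), the resulting /etc/fstab entries would look like:

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```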
Dec  7 14:46:42 np0005549633 systemd[1]: session-14.scope: Deactivated successfully.
Dec  7 14:46:42 np0005549633 systemd[1]: session-14.scope: Consumed 37.511s CPU time.
Dec  7 14:46:42 np0005549633 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Dec  7 14:46:42 np0005549633 systemd-logind[797]: Removed session 14.
Dec  7 14:46:48 np0005549633 systemd-logind[797]: New session 15 of user zuul.
Dec  7 14:46:48 np0005549633 systemd[1]: Started Session 15 of User zuul.
Dec  7 14:46:49 np0005549633 python3.9[68384]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  7 14:46:50 np0005549633 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  7 14:46:50 np0005549633 python3.9[68536]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:46:51 np0005549633 python3.9[68690]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:46:52 np0005549633 python3.9[68842]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzG0Nc7ckDue6E0yXAA5dvfHxUtZo58lqBBCJIPTphN/auMAo956vmFHavmR4fe22sNDr66v9l5XTXc5CqTw9fwW5udeFHG6wswpVhfoV89HLwRP162eogFLFh93gK4R4LMdpOWwut5dtBWldI7i9uYVsnuV9MX1w98BKiHxhDVxLzTWi9M1dIEtEmlRqv91fBYsqCPfI7eUBllQDD7HSO7lDrXCmJNKBVfvNbrTc33lpf31X0kE6r+DncKJwmjkti+S8ElIq9t3BcxVFKvRnpUsDzZ4ZeLMwsHOiLZ9uuXIorBJ3NCq0jh/vsumuAybnS+qj2qoDP83nz6Nbwn/t/y0m/LazqrUpRV7yBUfUYZlQciB/FFdAVTEmSvxcmVa+plCrWboL/UjIdj/shhKGsmAzxyoBCGuxvOMZf0xpGrpMJyW7AGYRT1F1yooOlWh572rqxVpBW4sc7oO5hVz9t1zNEQgjAUkrDzaYuD8gLU8RsjNMBt6X8vWlBVohoTg0=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBp2PoqfxN1oH4s7T23N50QhUkv8RYqBo5GjmzGx4ofc#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIR65D91/rxys768XyxrnbmOfsLEdxGTWn2TNt5Cs5Rp9Ww09Kr+e5bzB7JxUTZKfKPpfKk/eySS7arrFnItfdY=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgGlb3RaJVnne2iu9IJnvh2mrB2JmLNx9BmnJQmcNI6QT8bmCp30lJZMkTFG7LEsVI5zt5xH0ZF12XABfSPxtPG/GbWY1yNyFpnUefhUhc91gfMXvNJwkbMXQnpx+rpDgiCA9VwVRnzcj/EGPbQ/Le+YqEpblI9JivPWawMrgL1fXUsv1D2mqpsvHwT/P4fuHMjAL/aFAcJA3N8/lk/ke8qedL4ekSLswCH5knYPtr5LElkjF4yr7Rs86Bg2/o0EAhjX6Jp4Evmyf+cA8dqkgM2xK5cc52Bn7TPag8vyAtChg7a0WK1pFR15+R5A+/Rv6WCWBqiCfG1b0D90057RAMLPh+PdSrtID8PgCgrnxk5QkQLPT+RYWu4JLZBbofSIgQZEXArM9yAl9RbRT6bXdHzl4ro2GWLUY3JTdpBMqt6nJDt9aeViPaxtE8tIXKs9t9j6aWiPvMIZ65kylGafwlfOsB4gAktN0MdceNYfeDd6ygOpQsd8wXhDmnd9XezB8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGBZ3m+6JcCz74EPfXML3FU450yFpDbw1xe4qG3DX5Ko#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPBmTHSMBNJmdlwSrbvOSRnmL4XG3YU09cXWiaicnMTF9sslrG43FRcXjz7PN/qS7WTo9+HlMzBhSC+xXOG0ipY=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMq7IgCUxlmA8gLplXmKLYmrQ36FteaVIjzI1i8wxxN/7iMQcuCUnuUxpnUIOaamQP3dsXe+r1n/Pol5tzetx+ApLPfQqvolQLctgYsprhVDXQUO3m//NZ/yAJh31TCuyBXASUkJXG2/53Rs4rCUs4//GWBM1WMoN5wGo6mTib3AzvFVgBxVT3heZWlA0wd298ca57SCiLrlYqtwEqAUyRucUFShhTqvKczYclVGUIilwOc6pHxBHZiTTWBgkwh1jnpyE3fjP3NRuvUl8Ee1zSuSR0hfcKCBl+WF+GD697bR2zMmKPmRy2wFt3PMa+AbcRaAsKzAzaWy/zPyB5BHgQVcfdn9yENcQUkOQV0VYAHms+967klZml4PAcryw/dvleg5XI1hCMaVXBu8r/tztUoHT/NlGIdEmGWRo4p31C4ZMfr6Pvg/Cj0Kq3BE3uVeFOmws+/VbKE98pFD0t+WPT99mtttwtGnJeFogxUT//XK01h3JoHYVBmJLedNC2ld8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILsGpn69Te4rEWsv6ZO3REgCXP2xKTb1dCdOyczIDBcc#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOWkMwZohxnFbnZFhyrcYoX5jWF1irrPbikTmT6MXVccOoYy+4699QECdcbc/JKOY3ScUWXQrD1EVnnd6anfZoo=#012 create=True mode=0644 path=/tmp/ansible.gdgoozlg state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:53 np0005549633 python3.9[68994]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gdgoozlg' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:46:54 np0005549633 python3.9[69148]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gdgoozlg state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:46:54 np0005549633 systemd[1]: session-15.scope: Deactivated successfully.
Dec  7 14:46:54 np0005549633 systemd[1]: session-15.scope: Consumed 3.546s CPU time.
Dec  7 14:46:54 np0005549633 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Dec  7 14:46:54 np0005549633 systemd-logind[797]: Removed session 15.
Dec  7 14:47:00 np0005549633 systemd-logind[797]: New session 16 of user zuul.
Dec  7 14:47:00 np0005549633 systemd[1]: Started Session 16 of User zuul.
Dec  7 14:47:01 np0005549633 python3.9[69326]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:47:02 np0005549633 python3.9[69482]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  7 14:47:03 np0005549633 python3.9[69636]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  7 14:47:05 np0005549633 python3.9[69789]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:47:05 np0005549633 python3.9[69942]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:47:06 np0005549633 python3.9[70096]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:47:07 np0005549633 python3.9[70251]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:47:08 np0005549633 systemd[1]: session-16.scope: Deactivated successfully.
Dec  7 14:47:08 np0005549633 systemd[1]: session-16.scope: Consumed 4.561s CPU time.
Dec  7 14:47:08 np0005549633 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Dec  7 14:47:08 np0005549633 systemd-logind[797]: Removed session 16.
Dec  7 14:47:13 np0005549633 systemd-logind[797]: New session 17 of user zuul.
Dec  7 14:47:13 np0005549633 systemd[1]: Started Session 17 of User zuul.
Dec  7 14:47:14 np0005549633 python3.9[70429]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:47:15 np0005549633 python3.9[70585]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  7 14:47:16 np0005549633 python3.9[70669]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  7 14:47:18 np0005549633 python3.9[70820]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:47:20 np0005549633 python3.9[70971]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  7 14:47:20 np0005549633 python3.9[71121]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:47:20 np0005549633 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 14:47:21 np0005549633 python3.9[71272]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  7 14:47:22 np0005549633 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Dec  7 14:47:22 np0005549633 systemd[1]: session-17.scope: Deactivated successfully.
Dec  7 14:47:22 np0005549633 systemd[1]: session-17.scope: Consumed 6.061s CPU time.
Dec  7 14:47:22 np0005549633 systemd-logind[797]: Removed session 17.
Dec  7 14:47:30 np0005549633 systemd-logind[797]: New session 18 of user zuul.
Dec  7 14:47:30 np0005549633 systemd[1]: Started Session 18 of User zuul.
Dec  7 14:47:33 np0005549633 chronyd[58516]: Selected source 149.56.19.163 (pool.ntp.org)
Dec  7 14:47:36 np0005549633 python3[72039]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:47:38 np0005549633 python3[72134]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 14:47:40 np0005549633 python3[72162]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:47:40 np0005549633 python3[72188]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:47:40 np0005549633 kernel: loop: module loaded
Dec  7 14:47:40 np0005549633 kernel: loop3: detected capacity change from 0 to 41943040
Dec  7 14:47:41 np0005549633 python3[72223]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:47:41 np0005549633 lvm[72226]: PV /dev/loop3 not used.
Dec  7 14:47:41 np0005549633 lvm[72236]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:47:41 np0005549633 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  7 14:47:41 np0005549633 lvm[72238]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  7 14:47:41 np0005549633 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  7 14:47:43 np0005549633 python3[72316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:47:44 np0005549633 python3[72389]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765136863.7072017-36859-3048705586722/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:47:45 np0005549633 python3[72439]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  7 14:47:45 np0005549633 systemd[1]: Reloading.
Dec  7 14:47:45 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:47:45 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:47:45 np0005549633 systemd[1]: Starting Ceph OSD losetup...
Dec  7 14:47:45 np0005549633 bash[72479]: /dev/loop3: [64513]:4327949 (/var/lib/ceph-osd-0.img)
Dec  7 14:47:45 np0005549633 systemd[1]: Finished Ceph OSD losetup.
Dec  7 14:47:45 np0005549633 lvm[72482]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:47:45 np0005549633 lvm[72482]: VG ceph_vg0 finished
Dec  7 14:47:47 np0005549633 python3[72506]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  7 14:47:50 np0005549633 python3[72599]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 14:47:53 np0005549633 python3[72656]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  7 14:47:56 np0005549633 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  7 14:47:56 np0005549633 systemd[1]: Starting man-db-cache-update.service...
Dec  7 14:47:56 np0005549633 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  7 14:47:56 np0005549633 systemd[1]: Finished man-db-cache-update.service.
Dec  7 14:47:56 np0005549633 systemd[1]: run-r8a1002b4433741d8b2c795c576e818cb.service: Deactivated successfully.
Dec  7 14:47:57 np0005549633 python3[72771]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:47:57 np0005549633 python3[72799]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:47:57 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:47:57 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:47:58 np0005549633 python3[72864]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:47:58 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:47:58 np0005549633 python3[72890]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:47:59 np0005549633 python3[72968]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:47:59 np0005549633 python3[73041]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765136879.2387998-37084-19933598511039/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:48:00 np0005549633 python3[73143]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:48:01 np0005549633 python3[73216]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765136880.4492466-37102-121447228291741/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:48:01 np0005549633 python3[73266]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:48:02 np0005549633 python3[73294]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:48:02 np0005549633 python3[73322]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:48:02 np0005549633 python3[73350]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid a8ac706f-8288-541e-8e56-e1124d9b483d --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:48:03 np0005549633 systemd[1]: Created slice User Slice of UID 42477.
Dec  7 14:48:03 np0005549633 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  7 14:48:03 np0005549633 systemd-logind[797]: New session 19 of user ceph-admin.
Dec  7 14:48:03 np0005549633 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  7 14:48:03 np0005549633 systemd[1]: Starting User Manager for UID 42477...
Dec  7 14:48:03 np0005549633 systemd[73358]: Queued start job for default target Main User Target.
Dec  7 14:48:03 np0005549633 systemd[73358]: Created slice User Application Slice.
Dec  7 14:48:03 np0005549633 systemd[73358]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 14:48:03 np0005549633 systemd[73358]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 14:48:03 np0005549633 systemd[73358]: Reached target Paths.
Dec  7 14:48:03 np0005549633 systemd[73358]: Reached target Timers.
Dec  7 14:48:03 np0005549633 systemd[73358]: Starting D-Bus User Message Bus Socket...
Dec  7 14:48:03 np0005549633 systemd[73358]: Starting Create User's Volatile Files and Directories...
Dec  7 14:48:03 np0005549633 systemd[73358]: Finished Create User's Volatile Files and Directories.
Dec  7 14:48:03 np0005549633 systemd[73358]: Listening on D-Bus User Message Bus Socket.
Dec  7 14:48:03 np0005549633 systemd[73358]: Reached target Sockets.
Dec  7 14:48:03 np0005549633 systemd[73358]: Reached target Basic System.
Dec  7 14:48:03 np0005549633 systemd[73358]: Reached target Main User Target.
Dec  7 14:48:03 np0005549633 systemd[73358]: Startup finished in 137ms.
Dec  7 14:48:03 np0005549633 systemd[1]: Started User Manager for UID 42477.
Dec  7 14:48:03 np0005549633 systemd[1]: Started Session 19 of User ceph-admin.
Dec  7 14:48:03 np0005549633 systemd[1]: session-19.scope: Deactivated successfully.
Dec  7 14:48:03 np0005549633 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Dec  7 14:48:03 np0005549633 systemd-logind[797]: Removed session 19.
Dec  7 14:48:03 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:03 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:05 np0005549633 systemd[1]: var-lib-containers-storage-overlay-compat1496541114-lower\x2dmapped.mount: Deactivated successfully.
Dec  7 14:48:13 np0005549633 systemd[1]: Stopping User Manager for UID 42477...
Dec  7 14:48:13 np0005549633 systemd[73358]: Activating special unit Exit the Session...
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped target Main User Target.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped target Basic System.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped target Paths.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped target Sockets.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped target Timers.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  7 14:48:13 np0005549633 systemd[73358]: Closed D-Bus User Message Bus Socket.
Dec  7 14:48:13 np0005549633 systemd[73358]: Stopped Create User's Volatile Files and Directories.
Dec  7 14:48:13 np0005549633 systemd[73358]: Removed slice User Application Slice.
Dec  7 14:48:13 np0005549633 systemd[73358]: Reached target Shutdown.
Dec  7 14:48:13 np0005549633 systemd[73358]: Finished Exit the Session.
Dec  7 14:48:13 np0005549633 systemd[73358]: Reached target Exit the Session.
Dec  7 14:48:13 np0005549633 systemd[1]: user@42477.service: Deactivated successfully.
Dec  7 14:48:13 np0005549633 systemd[1]: Stopped User Manager for UID 42477.
Dec  7 14:48:13 np0005549633 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  7 14:48:13 np0005549633 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  7 14:48:13 np0005549633 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  7 14:48:13 np0005549633 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  7 14:48:13 np0005549633 systemd[1]: Removed slice User Slice of UID 42477.
Dec  7 14:48:31 np0005549633 podman[73452]: 2025-12-07 19:48:31.932934205 +0000 UTC m=+28.284529241 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:31 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:32.006625265 +0000 UTC m=+0.046043442 container create fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62 (image=quay.io/ceph/ceph:v19, name=jolly_sanderson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:32 np0005549633 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck4126967739-merged.mount: Deactivated successfully.
Dec  7 14:48:32 np0005549633 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  7 14:48:32 np0005549633 systemd[1]: Started libpod-conmon-fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62.scope.
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:31.98848127 +0000 UTC m=+0.027899467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:32 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:32.118261588 +0000 UTC m=+0.157679786 container init fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62 (image=quay.io/ceph/ceph:v19, name=jolly_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:32.13106436 +0000 UTC m=+0.170482537 container start fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62 (image=quay.io/ceph/ceph:v19, name=jolly_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:32.13514017 +0000 UTC m=+0.174558347 container attach fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62 (image=quay.io/ceph/ceph:v19, name=jolly_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:32 np0005549633 jolly_sanderson[73526]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:32.269956662 +0000 UTC m=+0.309374849 container died fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62 (image=quay.io/ceph/ceph:v19, name=jolly_sanderson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:32 np0005549633 podman[73512]: 2025-12-07 19:48:32.313839684 +0000 UTC m=+0.353257911 container remove fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62 (image=quay.io/ceph/ceph:v19, name=jolly_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-conmon-fb315c161891f559ddec0fe1983d62906e25f0990a1e7dd5084ef1551eafca62.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.390125044 +0000 UTC m=+0.051210610 container create de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51 (image=quay.io/ceph/ceph:v19, name=awesome_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:32 np0005549633 systemd[1]: Started libpod-conmon-de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51.scope.
Dec  7 14:48:32 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.360330058 +0000 UTC m=+0.021415604 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.470148702 +0000 UTC m=+0.131234328 container init de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51 (image=quay.io/ceph/ceph:v19, name=awesome_sutherland, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.47567255 +0000 UTC m=+0.136758086 container start de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51 (image=quay.io/ceph/ceph:v19, name=awesome_sutherland, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.479276946 +0000 UTC m=+0.140362552 container attach de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51 (image=quay.io/ceph/ceph:v19, name=awesome_sutherland, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  7 14:48:32 np0005549633 awesome_sutherland[73559]: 167 167
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.481401083 +0000 UTC m=+0.142486669 container died de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51 (image=quay.io/ceph/ceph:v19, name=awesome_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 14:48:32 np0005549633 podman[73543]: 2025-12-07 19:48:32.518204736 +0000 UTC m=+0.179290262 container remove de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51 (image=quay.io/ceph/ceph:v19, name=awesome_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-conmon-de09d1f22021f2dd26005e1bdcb907beffcb0456f81b901cf657e57e012abf51.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.602850499 +0000 UTC m=+0.056252715 container create a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4 (image=quay.io/ceph/ceph:v19, name=laughing_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:48:32 np0005549633 systemd[1]: Started libpod-conmon-a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4.scope.
Dec  7 14:48:32 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.667958208 +0000 UTC m=+0.121360454 container init a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4 (image=quay.io/ceph/ceph:v19, name=laughing_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.672456888 +0000 UTC m=+0.125859114 container start a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4 (image=quay.io/ceph/ceph:v19, name=laughing_goldstine, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.582461473 +0000 UTC m=+0.035863719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.677726029 +0000 UTC m=+0.131128275 container attach a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4 (image=quay.io/ceph/ceph:v19, name=laughing_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:32 np0005549633 laughing_goldstine[73592]: AQAQ2jVpegLbKRAAYzSoH5WfcglYkV3Io9iIZw==
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.706736575 +0000 UTC m=+0.160138781 container died a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4 (image=quay.io/ceph/ceph:v19, name=laughing_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:32 np0005549633 podman[73576]: 2025-12-07 19:48:32.738885214 +0000 UTC m=+0.192287420 container remove a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4 (image=quay.io/ceph/ceph:v19, name=laughing_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-conmon-a74a69c50c13f9b0d8b392a55393000f84ba834f89439aeff720469300ce72d4.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.804938379 +0000 UTC m=+0.042670071 container create dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720 (image=quay.io/ceph/ceph:v19, name=quizzical_bartik, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 14:48:32 np0005549633 systemd[1]: Started libpod-conmon-dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720.scope.
Dec  7 14:48:32 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.859743893 +0000 UTC m=+0.097475585 container init dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720 (image=quay.io/ceph/ceph:v19, name=quizzical_bartik, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.865588629 +0000 UTC m=+0.103320301 container start dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720 (image=quay.io/ceph/ceph:v19, name=quizzical_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.868694703 +0000 UTC m=+0.106426405 container attach dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720 (image=quay.io/ceph/ceph:v19, name=quizzical_bartik, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.788058988 +0000 UTC m=+0.025790680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:32 np0005549633 quizzical_bartik[73628]: AQAQ2jVpaeYjNRAABGehIhGtHyVOwHXu451BWQ==
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720.scope: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.895837258 +0000 UTC m=+0.133569010 container died dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720 (image=quay.io/ceph/ceph:v19, name=quizzical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:32 np0005549633 systemd[1]: var-lib-containers-storage-overlay-741373bd2451a5b789a8b2928c8b41c742267ff68ffab6634c6060f01f79b5fe-merged.mount: Deactivated successfully.
Dec  7 14:48:32 np0005549633 podman[73611]: 2025-12-07 19:48:32.935404585 +0000 UTC m=+0.173136257 container remove dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720 (image=quay.io/ceph/ceph:v19, name=quizzical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:32 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:32 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:32 np0005549633 systemd[1]: libpod-conmon-dc28bbfece500b325a2cbf8aeb8304d94857603fd317d1aed5d92cad0f4df720.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.026577832 +0000 UTC m=+0.058147395 container create 9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b (image=quay.io/ceph/ceph:v19, name=stoic_mclaren, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 14:48:33 np0005549633 systemd[1]: Started libpod-conmon-9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b.scope.
Dec  7 14:48:33 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.009528756 +0000 UTC m=+0.041098329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.107951937 +0000 UTC m=+0.139521480 container init 9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b (image=quay.io/ceph/ceph:v19, name=stoic_mclaren, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.112789896 +0000 UTC m=+0.144359449 container start 9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b (image=quay.io/ceph/ceph:v19, name=stoic_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.116020852 +0000 UTC m=+0.147590405 container attach 9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b (image=quay.io/ceph/ceph:v19, name=stoic_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:33 np0005549633 stoic_mclaren[73662]: AQAR2jVp8m1nCRAA01BYL8EfHepQd9gEKpz+BA==
Dec  7 14:48:33 np0005549633 systemd[1]: libpod-9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.162032012 +0000 UTC m=+0.193601575 container died 9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b (image=quay.io/ceph/ceph:v19, name=stoic_mclaren, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 14:48:33 np0005549633 podman[73646]: 2025-12-07 19:48:33.197743307 +0000 UTC m=+0.229312860 container remove 9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b (image=quay.io/ceph/ceph:v19, name=stoic_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:33 np0005549633 systemd[1]: libpod-conmon-9ad7b50aed01157c949eebcb6afb1ae89452040af52f50ed23fdb8df74bdcc5b.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 podman[73681]: 2025-12-07 19:48:33.268022435 +0000 UTC m=+0.048567300 container create ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3 (image=quay.io/ceph/ceph:v19, name=goofy_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:33 np0005549633 systemd[1]: Started libpod-conmon-ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3.scope.
Dec  7 14:48:33 np0005549633 podman[73681]: 2025-12-07 19:48:33.241163167 +0000 UTC m=+0.021708022 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:33 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:33 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8a5236c2a0bf6225992cd5f963f464d03ba5bc8c1fc66d1174492563382eaa/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:33 np0005549633 podman[73681]: 2025-12-07 19:48:33.361178174 +0000 UTC m=+0.141723019 container init ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3 (image=quay.io/ceph/ceph:v19, name=goofy_bartik, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:48:33 np0005549633 podman[73681]: 2025-12-07 19:48:33.367081492 +0000 UTC m=+0.147626317 container start ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3 (image=quay.io/ceph/ceph:v19, name=goofy_bartik, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:48:33 np0005549633 podman[73681]: 2025-12-07 19:48:33.370951575 +0000 UTC m=+0.151496400 container attach ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3 (image=quay.io/ceph/ceph:v19, name=goofy_bartik, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 14:48:33 np0005549633 goofy_bartik[73698]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  7 14:48:33 np0005549633 goofy_bartik[73698]: setting min_mon_release = quincy
Dec  7 14:48:33 np0005549633 goofy_bartik[73698]: /usr/bin/monmaptool: set fsid to a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:33 np0005549633 goofy_bartik[73698]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  7 14:48:33 np0005549633 systemd[1]: libpod-ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 podman[73705]: 2025-12-07 19:48:33.457782975 +0000 UTC m=+0.022736808 container died ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3 (image=quay.io/ceph/ceph:v19, name=goofy_bartik, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:33 np0005549633 podman[73705]: 2025-12-07 19:48:33.491471006 +0000 UTC m=+0.056424839 container remove ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3 (image=quay.io/ceph/ceph:v19, name=goofy_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 14:48:33 np0005549633 systemd[1]: libpod-conmon-ff8098255a1fedc71d5c88fdd071d7ed35d2619fcb3a109ead599c15572609e3.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.55484524 +0000 UTC m=+0.034849923 container create d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037 (image=quay.io/ceph/ceph:v19, name=funny_satoshi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:33 np0005549633 systemd[1]: Started libpod-conmon-d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037.scope.
Dec  7 14:48:33 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:33 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ed3c55076d5da5a205ea991b12ce8cb4999aa3b4cd6402b403ca2faea9628b/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:33 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ed3c55076d5da5a205ea991b12ce8cb4999aa3b4cd6402b403ca2faea9628b/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:33 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ed3c55076d5da5a205ea991b12ce8cb4999aa3b4cd6402b403ca2faea9628b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:33 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ed3c55076d5da5a205ea991b12ce8cb4999aa3b4cd6402b403ca2faea9628b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.615953022 +0000 UTC m=+0.095957755 container init d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037 (image=quay.io/ceph/ceph:v19, name=funny_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.622659852 +0000 UTC m=+0.102664535 container start d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037 (image=quay.io/ceph/ceph:v19, name=funny_satoshi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.626611998 +0000 UTC m=+0.106616721 container attach d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037 (image=quay.io/ceph/ceph:v19, name=funny_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.538563864 +0000 UTC m=+0.018568577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:33 np0005549633 systemd[1]: libpod-d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.684480334 +0000 UTC m=+0.164485017 container died d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037 (image=quay.io/ceph/ceph:v19, name=funny_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:33 np0005549633 podman[73721]: 2025-12-07 19:48:33.729944259 +0000 UTC m=+0.209948972 container remove d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037 (image=quay.io/ceph/ceph:v19, name=funny_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:48:33 np0005549633 systemd[1]: libpod-conmon-d8e8371d37b0f224f96589a38a6a9cf3a001543e278310caae9e4459a3b74037.scope: Deactivated successfully.
Dec  7 14:48:33 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:33 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:33 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:34 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:34 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:34 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:34 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:34 np0005549633 systemd[1]: Reached target All Ceph clusters and services.
Dec  7 14:48:34 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:34 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:34 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:34 np0005549633 systemd[1]: Reached target Ceph cluster a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:48:34 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:34 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:34 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:34 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:34 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:34 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:35 np0005549633 systemd[1]: Created slice Slice /system/ceph-a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:48:35 np0005549633 systemd[1]: Reached target System Time Set.
Dec  7 14:48:35 np0005549633 systemd[1]: Reached target System Time Synchronized.
Dec  7 14:48:35 np0005549633 systemd[1]: Starting Ceph mon.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:48:35 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:35 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:35 np0005549633 podman[74017]: 2025-12-07 19:48:35.286790815 +0000 UTC m=+0.042950099 container create 2e2bf7d28cbbb35b1a6004371d8d68efa84a64d73818fe2a27032444f68ddf1a (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0806cc8ed85f9364ffb06187bf0505f5745061dc87fd80b3ff6938f685ec58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0806cc8ed85f9364ffb06187bf0505f5745061dc87fd80b3ff6938f685ec58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0806cc8ed85f9364ffb06187bf0505f5745061dc87fd80b3ff6938f685ec58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d0806cc8ed85f9364ffb06187bf0505f5745061dc87fd80b3ff6938f685ec58/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 podman[74017]: 2025-12-07 19:48:35.358833159 +0000 UTC m=+0.114992443 container init 2e2bf7d28cbbb35b1a6004371d8d68efa84a64d73818fe2a27032444f68ddf1a (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 14:48:35 np0005549633 podman[74017]: 2025-12-07 19:48:35.267396136 +0000 UTC m=+0.023555410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:35 np0005549633 podman[74017]: 2025-12-07 19:48:35.364433149 +0000 UTC m=+0.120592413 container start 2e2bf7d28cbbb35b1a6004371d8d68efa84a64d73818fe2a27032444f68ddf1a (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:35 np0005549633 bash[74017]: 2e2bf7d28cbbb35b1a6004371d8d68efa84a64d73818fe2a27032444f68ddf1a
Dec  7 14:48:35 np0005549633 systemd[1]: Started Ceph mon.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: pidfile_write: ignore empty --pid-file
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: load: jerasure load: lrc 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: RocksDB version: 7.9.2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Git sha 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: DB SUMMARY
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: DB Session ID:  VGKX45SYGFIY8B8OBNX0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: CURRENT file:  CURRENT
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                         Options.error_if_exists: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.create_if_missing: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                                     Options.env: 0x55957f06bc20
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                                Options.info_log: 0x55957fb78d60
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                              Options.statistics: (nil)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                               Options.use_fsync: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                              Options.db_log_dir: 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                                 Options.wal_dir: 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                    Options.write_buffer_manager: 0x55957fb7d900
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.unordered_write: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                               Options.row_cache: None
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                              Options.wal_filter: None
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.two_write_queues: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.wal_compression: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.atomic_flush: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.max_background_jobs: 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.max_background_compactions: -1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.max_subcompactions: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.max_total_wal_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                          Options.max_open_files: -1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:       Options.compaction_readahead_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Compression algorithms supported:
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kZSTD supported: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kXpressCompression supported: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kBZip2Compression supported: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kLZ4Compression supported: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kZlibCompression supported: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: #011kSnappyCompression supported: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:           Options.merge_operator: 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:        Options.compaction_filter: None
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55957fb78500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55957fb9d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:        Options.write_buffer_size: 33554432
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:  Options.max_write_buffer_number: 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.compression: NoCompression
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.num_levels: 7
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 63acacd7-c601-437a-ae8a-58b144664c23
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765136915408158, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765136915410584, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765136915, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63acacd7-c601-437a-ae8a-58b144664c23", "db_session_id": "VGKX45SYGFIY8B8OBNX0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765136915410706, "job": 1, "event": "recovery_finished"}
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55957fb9ee00
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: DB pointer 0x55957fca8000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55957fb9d350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@-1(???) e0 preinit fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.434576944 +0000 UTC m=+0.038472739 container create ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880 (image=quay.io/ceph/ceph:v19, name=musing_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T19:48:33.416686+0000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : created 2025-12-07T19:48:33.416686+0000
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).mds e1 new map
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-12-07T19:48:35:442933+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mkfs a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 14:48:35 np0005549633 systemd[1]: Started libpod-conmon-ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880.scope.
Dec  7 14:48:35 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f617cf1990565831891d07087e5f89e7fe822ce3adb1bbcb498675ddc7087c9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f617cf1990565831891d07087e5f89e7fe822ce3adb1bbcb498675ddc7087c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f617cf1990565831891d07087e5f89e7fe822ce3adb1bbcb498675ddc7087c9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.418427802 +0000 UTC m=+0.022323617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.516522224 +0000 UTC m=+0.120418019 container init ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880 (image=quay.io/ceph/ceph:v19, name=musing_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.522646807 +0000 UTC m=+0.126542602 container start ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880 (image=quay.io/ceph/ceph:v19, name=musing_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.526283715 +0000 UTC m=+0.130179510 container attach ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880 (image=quay.io/ceph/ceph:v19, name=musing_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  7 14:48:35 np0005549633 ceph-mon[74036]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2569650449' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:  cluster:
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    id:     a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    health: HEALTH_OK
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]: 
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:  services:
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    mon: 1 daemons, quorum compute-0 (age 0.283371s)
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    mgr: no daemons active
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    osd: 0 osds: 0 up, 0 in
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]: 
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:  data:
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    pools:   0 pools, 0 pgs
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    objects: 0 objects, 0 B
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    usage:   0 B used, 0 B / 0 B avail
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]:    pgs:     
Dec  7 14:48:35 np0005549633 musing_zhukovsky[74091]: 
Dec  7 14:48:35 np0005549633 systemd[1]: libpod-ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880.scope: Deactivated successfully.
Dec  7 14:48:35 np0005549633 conmon[74091]: conmon ac9f7cfd46825f768d8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880.scope/container/memory.events
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.751264427 +0000 UTC m=+0.355160222 container died ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880 (image=quay.io/ceph/ceph:v19, name=musing_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Dec  7 14:48:35 np0005549633 podman[74037]: 2025-12-07 19:48:35.788975034 +0000 UTC m=+0.392870829 container remove ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880 (image=quay.io/ceph/ceph:v19, name=musing_zhukovsky, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:35 np0005549633 systemd[1]: libpod-conmon-ac9f7cfd46825f768d8ae4f4e0934d80b483dd43dec57d557fda3b4855002880.scope: Deactivated successfully.
Dec  7 14:48:35 np0005549633 podman[74127]: 2025-12-07 19:48:35.851962118 +0000 UTC m=+0.041238073 container create add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00 (image=quay.io/ceph/ceph:v19, name=zen_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:35 np0005549633 systemd[1]: Started libpod-conmon-add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00.scope.
Dec  7 14:48:35 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1474cbe1cdc087e02f9677909da1dbdef68465fc2491382d40f066eae1cb86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1474cbe1cdc087e02f9677909da1dbdef68465fc2491382d40f066eae1cb86/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1474cbe1cdc087e02f9677909da1dbdef68465fc2491382d40f066eae1cb86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd1474cbe1cdc087e02f9677909da1dbdef68465fc2491382d40f066eae1cb86/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:35 np0005549633 podman[74127]: 2025-12-07 19:48:35.920317665 +0000 UTC m=+0.109593600 container init add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00 (image=quay.io/ceph/ceph:v19, name=zen_lamarr, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:35 np0005549633 podman[74127]: 2025-12-07 19:48:35.831879701 +0000 UTC m=+0.021155636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:35 np0005549633 podman[74127]: 2025-12-07 19:48:35.928169255 +0000 UTC m=+0.117445190 container start add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00 (image=quay.io/ceph/ceph:v19, name=zen_lamarr, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:35 np0005549633 podman[74127]: 2025-12-07 19:48:35.932059849 +0000 UTC m=+0.121335784 container attach add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00 (image=quay.io/ceph/ceph:v19, name=zen_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2124767494' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2124767494' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 14:48:36 np0005549633 zen_lamarr[74143]: 
Dec  7 14:48:36 np0005549633 zen_lamarr[74143]: [global]
Dec  7 14:48:36 np0005549633 zen_lamarr[74143]: 	fsid = a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:36 np0005549633 zen_lamarr[74143]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  7 14:48:36 np0005549633 systemd[1]: libpod-add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00.scope: Deactivated successfully.
Dec  7 14:48:36 np0005549633 podman[74127]: 2025-12-07 19:48:36.160617317 +0000 UTC m=+0.349893252 container died add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00 (image=quay.io/ceph/ceph:v19, name=zen_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:36 np0005549633 systemd[1]: var-lib-containers-storage-overlay-cd1474cbe1cdc087e02f9677909da1dbdef68465fc2491382d40f066eae1cb86-merged.mount: Deactivated successfully.
Dec  7 14:48:36 np0005549633 podman[74127]: 2025-12-07 19:48:36.200228445 +0000 UTC m=+0.389504360 container remove add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00 (image=quay.io/ceph/ceph:v19, name=zen_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:36 np0005549633 systemd[1]: libpod-conmon-add1236247ff6b2ec19820d25c4b6959d93eff3dec59163d0874464cd09eec00.scope: Deactivated successfully.
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.270852713 +0000 UTC m=+0.046084783 container create 54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09 (image=quay.io/ceph/ceph:v19, name=gallant_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 14:48:36 np0005549633 systemd[1]: Started libpod-conmon-54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09.scope.
Dec  7 14:48:36 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:36 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f07ea53a5b47afaf0c5a5d3656c2a08d0ddca98c97c1901f2704db42fe07f13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:36 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f07ea53a5b47afaf0c5a5d3656c2a08d0ddca98c97c1901f2704db42fe07f13/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:36 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f07ea53a5b47afaf0c5a5d3656c2a08d0ddca98c97c1901f2704db42fe07f13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:36 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f07ea53a5b47afaf0c5a5d3656c2a08d0ddca98c97c1901f2704db42fe07f13/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.249953354 +0000 UTC m=+0.025185444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.357753585 +0000 UTC m=+0.132985685 container init 54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09 (image=quay.io/ceph/ceph:v19, name=gallant_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.363334274 +0000 UTC m=+0.138566374 container start 54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09 (image=quay.io/ceph/ceph:v19, name=gallant_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.367371681 +0000 UTC m=+0.142603751 container attach 54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09 (image=quay.io/ceph/ceph:v19, name=gallant_moore, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: from='client.? 192.168.122.100:0/2124767494' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: from='client.? 192.168.122.100:0/2124767494' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196747344' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:48:36 np0005549633 systemd[1]: libpod-54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09.scope: Deactivated successfully.
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.568088736 +0000 UTC m=+0.343320806 container died 54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09 (image=quay.io/ceph/ceph:v19, name=gallant_moore, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 14:48:36 np0005549633 systemd[1]: var-lib-containers-storage-overlay-7f07ea53a5b47afaf0c5a5d3656c2a08d0ddca98c97c1901f2704db42fe07f13-merged.mount: Deactivated successfully.
Dec  7 14:48:36 np0005549633 podman[74181]: 2025-12-07 19:48:36.605098535 +0000 UTC m=+0.380330605 container remove 54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09 (image=quay.io/ceph/ceph:v19, name=gallant_moore, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 14:48:36 np0005549633 systemd[1]: libpod-conmon-54883f19ae9f0c1e2dc26e21593ed408505f5803902148179f2343cea259bb09.scope: Deactivated successfully.
Dec  7 14:48:36 np0005549633 systemd[1]: Stopping Ceph mon.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: mon.compute-0@0(leader) e1 shutdown
Dec  7 14:48:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0[74032]: 2025-12-07T19:48:36.803+0000 7feec7e35640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  7 14:48:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0[74032]: 2025-12-07T19:48:36.803+0000 7feec7e35640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  7 14:48:36 np0005549633 ceph-mon[74036]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  7 14:48:36 np0005549633 podman[74266]: 2025-12-07 19:48:36.968188158 +0000 UTC m=+0.199900903 container died 2e2bf7d28cbbb35b1a6004371d8d68efa84a64d73818fe2a27032444f68ddf1a (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:36 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4d0806cc8ed85f9364ffb06187bf0505f5745061dc87fd80b3ff6938f685ec58-merged.mount: Deactivated successfully.
Dec  7 14:48:37 np0005549633 podman[74266]: 2025-12-07 19:48:37.007714224 +0000 UTC m=+0.239426949 container remove 2e2bf7d28cbbb35b1a6004371d8d68efa84a64d73818fe2a27032444f68ddf1a (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 14:48:37 np0005549633 bash[74266]: ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0
Dec  7 14:48:37 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:37 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:37 np0005549633 systemd[1]: ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@mon.compute-0.service: Deactivated successfully.
Dec  7 14:48:37 np0005549633 systemd[1]: Stopped Ceph mon.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:48:37 np0005549633 systemd[1]: Starting Ceph mon.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:48:37 np0005549633 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  7 14:48:37 np0005549633 podman[74365]: 2025-12-07 19:48:37.455571453 +0000 UTC m=+0.048065456 container create a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad721848b009be448f92ed0ed5fcd843109ff0a41d48e308a7d1a76f0567f01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad721848b009be448f92ed0ed5fcd843109ff0a41d48e308a7d1a76f0567f01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad721848b009be448f92ed0ed5fcd843109ff0a41d48e308a7d1a76f0567f01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cad721848b009be448f92ed0ed5fcd843109ff0a41d48e308a7d1a76f0567f01/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 podman[74365]: 2025-12-07 19:48:37.525891342 +0000 UTC m=+0.118385385 container init a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:37 np0005549633 podman[74365]: 2025-12-07 19:48:37.433229176 +0000 UTC m=+0.025723279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:37 np0005549633 podman[74365]: 2025-12-07 19:48:37.537682117 +0000 UTC m=+0.130176130 container start a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:48:37 np0005549633 bash[74365]: a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd
Dec  7 14:48:37 np0005549633 systemd[1]: Started Ceph mon.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: pidfile_write: ignore empty --pid-file
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: load: jerasure load: lrc 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: RocksDB version: 7.9.2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Git sha 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: DB SUMMARY
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: DB Session ID:  ORNL7KHN9J7Q3V6MXI96
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: CURRENT file:  CURRENT
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                         Options.error_if_exists: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.create_if_missing: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                                     Options.env: 0x560b6211fc20
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                                Options.info_log: 0x560b63d3bac0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                              Options.statistics: (nil)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                               Options.use_fsync: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                              Options.db_log_dir: 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                                 Options.wal_dir: 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                    Options.write_buffer_manager: 0x560b63d3f900
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.unordered_write: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                               Options.row_cache: None
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                              Options.wal_filter: None
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.two_write_queues: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.wal_compression: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.atomic_flush: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.max_background_jobs: 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.max_background_compactions: -1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.max_subcompactions: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.max_total_wal_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                          Options.max_open_files: -1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:       Options.compaction_readahead_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Compression algorithms supported:
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kZSTD supported: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kXpressCompression supported: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kBZip2Compression supported: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kLZ4Compression supported: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kZlibCompression supported: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: #011kSnappyCompression supported: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:           Options.merge_operator: 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:        Options.compaction_filter: None
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560b63d3aaa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x560b63d5f350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:        Options.write_buffer_size: 33554432
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:  Options.max_write_buffer_number: 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.compression: NoCompression
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.num_levels: 7
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 63acacd7-c601-437a-ae8a-58b144664c23
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765136917608690, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765136917614123, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765136917, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63acacd7-c601-437a-ae8a-58b144664c23", "db_session_id": "ORNL7KHN9J7Q3V6MXI96", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765136917614337, "job": 1, "event": "recovery_finished"}
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560b63d60e00
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: DB pointer 0x560b63e6a000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.2      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x560b63d5f350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???) e1 preinit fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).mds e1 new map
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-12-07T19:48:35:442933+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  7 14:48:37 np0005549633 podman[74385]: 2025-12-07 19:48:37.637428513 +0000 UTC m=+0.061229687 container create 2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2 (image=quay.io/ceph/ceph:v19, name=exciting_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T19:48:33.416686+0000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : created 2025-12-07T19:48:33.416686+0000
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  7 14:48:37 np0005549633 systemd[1]: Started libpod-conmon-2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2.scope.
Dec  7 14:48:37 np0005549633 podman[74385]: 2025-12-07 19:48:37.604322868 +0000 UTC m=+0.028124152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  7 14:48:37 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4cf8eff866249f2d10d676e68a19aa114f77d806465c2817a0450a221742ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4cf8eff866249f2d10d676e68a19aa114f77d806465c2817a0450a221742ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a4cf8eff866249f2d10d676e68a19aa114f77d806465c2817a0450a221742ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:37 np0005549633 podman[74385]: 2025-12-07 19:48:37.739772158 +0000 UTC m=+0.163573352 container init 2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2 (image=quay.io/ceph/ceph:v19, name=exciting_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 14:48:37 np0005549633 podman[74385]: 2025-12-07 19:48:37.753946077 +0000 UTC m=+0.177747291 container start 2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2 (image=quay.io/ceph/ceph:v19, name=exciting_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:48:37 np0005549633 podman[74385]: 2025-12-07 19:48:37.758398156 +0000 UTC m=+0.182199350 container attach 2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2 (image=quay.io/ceph/ceph:v19, name=exciting_gagarin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  7 14:48:37 np0005549633 systemd[1]: libpod-2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2.scope: Deactivated successfully.
Dec  7 14:48:37 np0005549633 podman[74385]: 2025-12-07 19:48:37.979019011 +0000 UTC m=+0.402820205 container died 2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2 (image=quay.io/ceph/ceph:v19, name=exciting_gagarin, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  7 14:48:38 np0005549633 podman[74385]: 2025-12-07 19:48:38.019613256 +0000 UTC m=+0.443414430 container remove 2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2 (image=quay.io/ceph/ceph:v19, name=exciting_gagarin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 14:48:38 np0005549633 systemd[1]: libpod-conmon-2aa8bba43477903d6510fc09f15a2f1dff9dd39b8a9ca84389cd876beae11bf2.scope: Deactivated successfully.
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.085438096 +0000 UTC m=+0.045043608 container create c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c (image=quay.io/ceph/ceph:v19, name=recursing_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:48:38 np0005549633 systemd[1]: Started libpod-conmon-c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c.scope.
Dec  7 14:48:38 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:38 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70aa0460de1164edcf3066a06057d3d7a31af0c5435b650151ca344a74456f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:38 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70aa0460de1164edcf3066a06057d3d7a31af0c5435b650151ca344a74456f5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:38 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e70aa0460de1164edcf3066a06057d3d7a31af0c5435b650151ca344a74456f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.067813618 +0000 UTC m=+0.027419140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.177404761 +0000 UTC m=+0.137010273 container init c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c (image=quay.io/ceph/ceph:v19, name=recursing_nobel, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.184266681 +0000 UTC m=+0.143872193 container start c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c (image=quay.io/ceph/ceph:v19, name=recursing_nobel, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.188358194 +0000 UTC m=+0.147963696 container attach c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c (image=quay.io/ceph/ceph:v19, name=recursing_nobel, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec  7 14:48:38 np0005549633 systemd[1]: libpod-c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c.scope: Deactivated successfully.
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.397903073 +0000 UTC m=+0.357508585 container died c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c (image=quay.io/ceph/ceph:v19, name=recursing_nobel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:38 np0005549633 systemd[1]: var-lib-containers-storage-overlay-e70aa0460de1164edcf3066a06057d3d7a31af0c5435b650151ca344a74456f5-merged.mount: Deactivated successfully.
Dec  7 14:48:38 np0005549633 podman[74478]: 2025-12-07 19:48:38.430851074 +0000 UTC m=+0.390456566 container remove c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c (image=quay.io/ceph/ceph:v19, name=recursing_nobel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:38 np0005549633 systemd[1]: libpod-conmon-c4129288070a6da06fad32be8d4f7d891f8a7e27059d6d49e32972306af46b0c.scope: Deactivated successfully.
Dec  7 14:48:38 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:38 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:38 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:38 np0005549633 systemd[1]: Reloading.
Dec  7 14:48:38 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:48:38 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:48:38 np0005549633 systemd[1]: Starting Ceph mgr.compute-0.dyzcyj for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:48:39 np0005549633 podman[74661]: 2025-12-07 19:48:39.196887981 +0000 UTC m=+0.054351125 container create a557fd32ab2dae8c12fc6682d65ddd7ffbb5f8fa4e490450d665c11a40794aaf (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d86d14f5e415935f1d803ff8dd1efc8f6db03ead7fcec404393a8d2c06685a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d86d14f5e415935f1d803ff8dd1efc8f6db03ead7fcec404393a8d2c06685a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d86d14f5e415935f1d803ff8dd1efc8f6db03ead7fcec404393a8d2c06685a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69d86d14f5e415935f1d803ff8dd1efc8f6db03ead7fcec404393a8d2c06685a/merged/var/lib/ceph/mgr/ceph-compute-0.dyzcyj supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 podman[74661]: 2025-12-07 19:48:39.26331428 +0000 UTC m=+0.120777424 container init a557fd32ab2dae8c12fc6682d65ddd7ffbb5f8fa4e490450d665c11a40794aaf (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:39 np0005549633 podman[74661]: 2025-12-07 19:48:39.169940516 +0000 UTC m=+0.027403710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:39 np0005549633 podman[74661]: 2025-12-07 19:48:39.268158483 +0000 UTC m=+0.125621627 container start a557fd32ab2dae8c12fc6682d65ddd7ffbb5f8fa4e490450d665c11a40794aaf (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:48:39 np0005549633 bash[74661]: a557fd32ab2dae8c12fc6682d65ddd7ffbb5f8fa4e490450d665c11a40794aaf
Dec  7 14:48:39 np0005549633 systemd[1]: Started Ceph mgr.compute-0.dyzcyj for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: pidfile_write: ignore empty --pid-file
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'alerts'
Dec  7 14:48:39 np0005549633 podman[74681]: 2025-12-07 19:48:39.373411346 +0000 UTC m=+0.055584489 container create 7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361 (image=quay.io/ceph/ceph:v19, name=happy_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:39 np0005549633 systemd[1]: Started libpod-conmon-7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361.scope.
Dec  7 14:48:39 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333149a5dc488a7b37070b2098a41624dfc674d15c5629a6698584bc3d840075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333149a5dc488a7b37070b2098a41624dfc674d15c5629a6698584bc3d840075/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/333149a5dc488a7b37070b2098a41624dfc674d15c5629a6698584bc3d840075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:39 np0005549633 podman[74681]: 2025-12-07 19:48:39.35546373 +0000 UTC m=+0.037636893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'balancer'
Dec  7 14:48:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:39.450+0000 7f3330c6e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:48:39 np0005549633 podman[74681]: 2025-12-07 19:48:39.476117498 +0000 UTC m=+0.158290841 container init 7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361 (image=quay.io/ceph/ceph:v19, name=happy_neumann, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Dec  7 14:48:39 np0005549633 podman[74681]: 2025-12-07 19:48:39.484778298 +0000 UTC m=+0.166951451 container start 7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361 (image=quay.io/ceph/ceph:v19, name=happy_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:39 np0005549633 podman[74681]: 2025-12-07 19:48:39.488486331 +0000 UTC m=+0.170659534 container attach 7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361 (image=quay.io/ceph/ceph:v19, name=happy_neumann, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:48:39 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'cephadm'
Dec  7 14:48:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:39.548+0000 7f3330c6e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:48:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 14:48:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761606236' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 14:48:39 np0005549633 happy_neumann[74717]: 
Dec  7 14:48:39 np0005549633 happy_neumann[74717]: {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "health": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "status": "HEALTH_OK",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "checks": {},
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "mutes": []
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "election_epoch": 5,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "quorum": [
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        0
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    ],
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "quorum_names": [
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "compute-0"
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    ],
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "quorum_age": 2,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "monmap": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "epoch": 1,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "min_mon_release_name": "squid",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_mons": 1
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "osdmap": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "epoch": 1,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_osds": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_up_osds": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "osd_up_since": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_in_osds": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "osd_in_since": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_remapped_pgs": 0
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "pgmap": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "pgs_by_state": [],
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_pgs": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_pools": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_objects": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "data_bytes": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "bytes_used": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "bytes_avail": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "bytes_total": 0
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "fsmap": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "epoch": 1,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "btime": "2025-12-07T19:48:35.442933+0000",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "by_rank": [],
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "up:standby": 0
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "mgrmap": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "available": false,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "num_standbys": 0,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "modules": [
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:            "iostat",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:            "nfs",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:            "restful"
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        ],
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "services": {}
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "servicemap": {
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "epoch": 1,
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "modified": "2025-12-07T19:48:35.445282+0000",
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:        "services": {}
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    },
Dec  7 14:48:39 np0005549633 happy_neumann[74717]:    "progress_events": {}
Dec  7 14:48:39 np0005549633 happy_neumann[74717]: }
Dec  7 14:48:39 np0005549633 systemd[1]: libpod-7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361.scope: Deactivated successfully.
Dec  7 14:48:39 np0005549633 podman[74743]: 2025-12-07 19:48:39.732163384 +0000 UTC m=+0.031164114 container died 7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361 (image=quay.io/ceph/ceph:v19, name=happy_neumann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 14:48:39 np0005549633 systemd[1]: var-lib-containers-storage-overlay-333149a5dc488a7b37070b2098a41624dfc674d15c5629a6698584bc3d840075-merged.mount: Deactivated successfully.
Dec  7 14:48:39 np0005549633 podman[74743]: 2025-12-07 19:48:39.77828029 +0000 UTC m=+0.077280970 container remove 7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361 (image=quay.io/ceph/ceph:v19, name=happy_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:39 np0005549633 systemd[1]: libpod-conmon-7db41dbbff63fdf6a105043645d3b69e654854d71a1e31fd4659a45fc9217361.scope: Deactivated successfully.
Dec  7 14:48:40 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'crash'
Dec  7 14:48:40 np0005549633 ceph-mgr[74680]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:48:40 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'dashboard'
Dec  7 14:48:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:40.402+0000 7f3330c6e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:48:40 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'devicehealth'
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:41.040+0000 7f3330c6e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  from numpy import show_config as show_numpy_config
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:41.219+0000 7f3330c6e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'influx'
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:41.294+0000 7f3330c6e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'insights'
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'iostat'
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'k8sevents'
Dec  7 14:48:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:41.434+0000 7f3330c6e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'localpool'
Dec  7 14:48:41 np0005549633 podman[74770]: 2025-12-07 19:48:41.883330819 +0000 UTC m=+0.065617277 container create bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09 (image=quay.io/ceph/ceph:v19, name=inspiring_babbage, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 14:48:41 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 14:48:41 np0005549633 systemd[1]: Started libpod-conmon-bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09.scope.
Dec  7 14:48:41 np0005549633 podman[74770]: 2025-12-07 19:48:41.856408445 +0000 UTC m=+0.038694903 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:41 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ffaca89e7a9747fbc31d6e02e82008123d4697542a9c6658567e588909d38c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ffaca89e7a9747fbc31d6e02e82008123d4697542a9c6658567e588909d38c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96ffaca89e7a9747fbc31d6e02e82008123d4697542a9c6658567e588909d38c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:41 np0005549633 podman[74770]: 2025-12-07 19:48:41.99540085 +0000 UTC m=+0.177687318 container init bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09 (image=quay.io/ceph/ceph:v19, name=inspiring_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 14:48:42 np0005549633 podman[74770]: 2025-12-07 19:48:42.003454424 +0000 UTC m=+0.185740882 container start bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09 (image=quay.io/ceph/ceph:v19, name=inspiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:42 np0005549633 podman[74770]: 2025-12-07 19:48:42.011434984 +0000 UTC m=+0.193721432 container attach bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09 (image=quay.io/ceph/ceph:v19, name=inspiring_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mirroring'
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'nfs'
Dec  7 14:48:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 14:48:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1121666680' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]: 
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]: {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "health": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "status": "HEALTH_OK",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "checks": {},
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "mutes": []
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "election_epoch": 5,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "quorum": [
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        0
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    ],
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "quorum_names": [
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "compute-0"
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    ],
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "quorum_age": 4,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "monmap": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "epoch": 1,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "min_mon_release_name": "squid",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_mons": 1
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "osdmap": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "epoch": 1,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_osds": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_up_osds": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "osd_up_since": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_in_osds": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "osd_in_since": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_remapped_pgs": 0
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "pgmap": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "pgs_by_state": [],
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_pgs": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_pools": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_objects": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "data_bytes": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "bytes_used": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "bytes_avail": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "bytes_total": 0
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "fsmap": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "epoch": 1,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "btime": "2025-12-07T19:48:35.442933+0000",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "by_rank": [],
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "up:standby": 0
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "mgrmap": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "available": false,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "num_standbys": 0,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "modules": [
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:            "iostat",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:            "nfs",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:            "restful"
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        ],
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "services": {}
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "servicemap": {
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "epoch": 1,
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "modified": "2025-12-07T19:48:35.445282+0000",
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:        "services": {}
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    },
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]:    "progress_events": {}
Dec  7 14:48:42 np0005549633 inspiring_babbage[74789]: }
Dec  7 14:48:42 np0005549633 systemd[1]: libpod-bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09.scope: Deactivated successfully.
Dec  7 14:48:42 np0005549633 podman[74815]: 2025-12-07 19:48:42.314151661 +0000 UTC m=+0.033488738 container died bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09 (image=quay.io/ceph/ceph:v19, name=inspiring_babbage, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:48:42 np0005549633 systemd[1]: var-lib-containers-storage-overlay-96ffaca89e7a9747fbc31d6e02e82008123d4697542a9c6658567e588909d38c-merged.mount: Deactivated successfully.
Dec  7 14:48:42 np0005549633 podman[74815]: 2025-12-07 19:48:42.36689426 +0000 UTC m=+0.086231337 container remove bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09 (image=quay.io/ceph/ceph:v19, name=inspiring_babbage, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:42 np0005549633 systemd[1]: libpod-conmon-bee3c1290445272da054ac3219ae95c63e8d697dc17f6dc2e126690bf567ce09.scope: Deactivated successfully.
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'orchestrator'
Dec  7 14:48:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:42.454+0000 7f3330c6e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:42.676+0000 7f3330c6e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:42.751+0000 7f3330c6e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_support'
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 14:48:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:42.823+0000 7f3330c6e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:42.901+0000 7f3330c6e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'progress'
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:48:42 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'prometheus'
Dec  7 14:48:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:42.969+0000 7f3330c6e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rbd_support'
Dec  7 14:48:43 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:43.296+0000 7f3330c6e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'restful'
Dec  7 14:48:43 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:43.394+0000 7f3330c6e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rgw'
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:48:43 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rook'
Dec  7 14:48:43 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:43.847+0000 7f3330c6e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:44.427+0000 7f3330c6e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'selftest'
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.452318417 +0000 UTC m=+0.048539694 container create aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62 (image=quay.io/ceph/ceph:v19, name=exciting_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:44 np0005549633 systemd[1]: Started libpod-conmon-aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62.scope.
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'snap_schedule'
Dec  7 14:48:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:44.506+0000 7f3330c6e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:44 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6365580ff319d389f5edc3f55f17f664b0f0d214888da44825d23d9487f532/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:44 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6365580ff319d389f5edc3f55f17f664b0f0d214888da44825d23d9487f532/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:44 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba6365580ff319d389f5edc3f55f17f664b0f0d214888da44825d23d9487f532/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.434068892 +0000 UTC m=+0.030290159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.541397402 +0000 UTC m=+0.137618719 container init aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62 (image=quay.io/ceph/ceph:v19, name=exciting_maxwell, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.549604159 +0000 UTC m=+0.145825436 container start aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62 (image=quay.io/ceph/ceph:v19, name=exciting_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.553697192 +0000 UTC m=+0.149918469 container attach aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62 (image=quay.io/ceph/ceph:v19, name=exciting_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'stats'
Dec  7 14:48:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:44.588+0000 7f3330c6e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'status'
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telegraf'
Dec  7 14:48:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:44.739+0000 7f3330c6e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 14:48:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1004806097' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]: 
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]: {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "health": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "status": "HEALTH_OK",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "checks": {},
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "mutes": []
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "election_epoch": 5,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "quorum": [
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        0
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    ],
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "quorum_names": [
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "compute-0"
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    ],
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "quorum_age": 7,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "monmap": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "epoch": 1,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "min_mon_release_name": "squid",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_mons": 1
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "osdmap": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "epoch": 1,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_osds": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_up_osds": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "osd_up_since": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_in_osds": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "osd_in_since": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_remapped_pgs": 0
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "pgmap": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "pgs_by_state": [],
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_pgs": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_pools": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_objects": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "data_bytes": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "bytes_used": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "bytes_avail": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "bytes_total": 0
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "fsmap": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "epoch": 1,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "btime": "2025-12-07T19:48:35:442933+0000",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "by_rank": [],
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "up:standby": 0
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "mgrmap": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "available": false,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "num_standbys": 0,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "modules": [
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:            "iostat",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:            "nfs",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:            "restful"
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        ],
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "services": {}
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "servicemap": {
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "epoch": 1,
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "modified": "2025-12-07T19:48:35.445282+0000",
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:        "services": {}
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    },
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]:    "progress_events": {}
Dec  7 14:48:44 np0005549633 exciting_maxwell[74847]: }
Dec  7 14:48:44 np0005549633 systemd[1]: libpod-aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62.scope: Deactivated successfully.
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.81417073 +0000 UTC m=+0.410392007 container died aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62 (image=quay.io/ceph/ceph:v19, name=exciting_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telemetry'
Dec  7 14:48:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:44.814+0000 7f3330c6e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 systemd[1]: var-lib-containers-storage-overlay-ba6365580ff319d389f5edc3f55f17f664b0f0d214888da44825d23d9487f532-merged.mount: Deactivated successfully.
Dec  7 14:48:44 np0005549633 podman[74831]: 2025-12-07 19:48:44.864492022 +0000 UTC m=+0.460713269 container remove aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62 (image=quay.io/ceph/ceph:v19, name=exciting_maxwell, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 14:48:44 np0005549633 systemd[1]: libpod-conmon-aa45413846980ab705f8686a970f5f34767052f38c2fc9e50013851028706b62.scope: Deactivated successfully.
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:48:44 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 14:48:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:44.969+0000 7f3330c6e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'volumes'
Dec  7 14:48:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:45.179+0000 7f3330c6e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'zabbix'
Dec  7 14:48:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:45.429+0000 7f3330c6e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:45.499+0000 7f3330c6e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: ms_deliver_dispatch: unhandled message 0x563660cfa9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dyzcyj
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map Activating!
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.dyzcyj(active, starting, since 0.0115435s)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map I am now activating
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: balancer
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: crash
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Manager daemon compute-0.dyzcyj is now available
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [balancer INFO root] Starting
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: devicehealth
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:48:45
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [balancer INFO root] do_upmap
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [balancer INFO root] No pools available
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Starting
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: iostat
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: nfs
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: orchestrator
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: pg_autoscaler
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: progress
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [progress INFO root] Loading...
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [progress INFO root] No stored events to load
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded [] historic events
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] recovery thread starting
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] starting setup
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: rbd_support
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: restful
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: status
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: telemetry
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [restful WARNING root] server not running: no certificate configured
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] PerfHandler: starting
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TaskHandler: starting
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"} v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] setup complete
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  7 14:48:45 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: volumes
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: Activating manager daemon compute-0.dyzcyj
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: Manager daemon compute-0.dyzcyj is now available
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:45 np0005549633 ceph-mon[74384]: from='mgr.14102 192.168.122.100:0/3357888113' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:46 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.dyzcyj(active, since 1.02372s)
Dec  7 14:48:46 np0005549633 podman[74964]: 2025-12-07 19:48:46.959069752 +0000 UTC m=+0.055480126 container create 1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7 (image=quay.io/ceph/ceph:v19, name=compassionate_moore, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 14:48:47 np0005549633 systemd[1]: Started libpod-conmon-1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7.scope.
Dec  7 14:48:47 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9482e075f8c2680da9907e470cee4911f5305e2e16d4ad34637f33c010cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9482e075f8c2680da9907e470cee4911f5305e2e16d4ad34637f33c010cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fefe9482e075f8c2680da9907e470cee4911f5305e2e16d4ad34637f33c010cd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 podman[74964]: 2025-12-07 19:48:46.938344268 +0000 UTC m=+0.034754632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:47 np0005549633 podman[74964]: 2025-12-07 19:48:47.047024346 +0000 UTC m=+0.143434700 container init 1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7 (image=quay.io/ceph/ceph:v19, name=compassionate_moore, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 14:48:47 np0005549633 podman[74964]: 2025-12-07 19:48:47.054215735 +0000 UTC m=+0.150626109 container start 1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7 (image=quay.io/ceph/ceph:v19, name=compassionate_moore, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 14:48:47 np0005549633 podman[74964]: 2025-12-07 19:48:47.068379627 +0000 UTC m=+0.164790061 container attach 1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7 (image=quay.io/ceph/ceph:v19, name=compassionate_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:48:47 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:48:47 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.dyzcyj(active, since 2s)
Dec  7 14:48:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  7 14:48:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629811165' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]: 
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]: {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "health": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "status": "HEALTH_OK",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "checks": {},
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "mutes": []
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "election_epoch": 5,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "quorum": [
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        0
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    ],
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "quorum_names": [
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "compute-0"
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    ],
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "quorum_age": 9,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "monmap": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "epoch": 1,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "min_mon_release_name": "squid",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_mons": 1
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "osdmap": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "epoch": 1,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_osds": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_up_osds": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "osd_up_since": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_in_osds": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "osd_in_since": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_remapped_pgs": 0
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "pgmap": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "pgs_by_state": [],
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_pgs": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_pools": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_objects": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "data_bytes": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "bytes_used": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "bytes_avail": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "bytes_total": 0
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "fsmap": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "epoch": 1,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "btime": "2025-12-07T19:48:35:442933+0000",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "by_rank": [],
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "up:standby": 0
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "mgrmap": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "available": true,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "num_standbys": 0,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "modules": [
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:            "iostat",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:            "nfs",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:            "restful"
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        ],
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "services": {}
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "servicemap": {
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "epoch": 1,
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "modified": "2025-12-07T19:48:35.445282+0000",
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:        "services": {}
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    },
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]:    "progress_events": {}
Dec  7 14:48:47 np0005549633 compassionate_moore[74980]: }
Dec  7 14:48:47 np0005549633 systemd[1]: libpod-1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7.scope: Deactivated successfully.
Dec  7 14:48:47 np0005549633 podman[74964]: 2025-12-07 19:48:47.569096353 +0000 UTC m=+0.665506697 container died 1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7 (image=quay.io/ceph/ceph:v19, name=compassionate_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:47 np0005549633 systemd[1]: var-lib-containers-storage-overlay-fefe9482e075f8c2680da9907e470cee4911f5305e2e16d4ad34637f33c010cd-merged.mount: Deactivated successfully.
Dec  7 14:48:47 np0005549633 podman[74964]: 2025-12-07 19:48:47.613371337 +0000 UTC m=+0.709781681 container remove 1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7 (image=quay.io/ceph/ceph:v19, name=compassionate_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:47 np0005549633 systemd[1]: libpod-conmon-1b671d9dcee3ad3c3923a8ecbc55ce1898c367fb8536c328ecb46a39a9d369e7.scope: Deactivated successfully.
Dec  7 14:48:47 np0005549633 podman[75018]: 2025-12-07 19:48:47.681652467 +0000 UTC m=+0.045922452 container create 1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9 (image=quay.io/ceph/ceph:v19, name=sharp_bohr, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 14:48:47 np0005549633 systemd[1]: Started libpod-conmon-1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9.scope.
Dec  7 14:48:47 np0005549633 podman[75018]: 2025-12-07 19:48:47.659103263 +0000 UTC m=+0.023373278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:47 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8aaac8c3d7f8d622ea0909808d8eeaff0a8b686a59e2dadad2365226cb0d7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8aaac8c3d7f8d622ea0909808d8eeaff0a8b686a59e2dadad2365226cb0d7f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8aaac8c3d7f8d622ea0909808d8eeaff0a8b686a59e2dadad2365226cb0d7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8aaac8c3d7f8d622ea0909808d8eeaff0a8b686a59e2dadad2365226cb0d7f/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:47 np0005549633 podman[75018]: 2025-12-07 19:48:47.778515007 +0000 UTC m=+0.142785012 container init 1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9 (image=quay.io/ceph/ceph:v19, name=sharp_bohr, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:47 np0005549633 podman[75018]: 2025-12-07 19:48:47.788833393 +0000 UTC m=+0.153103388 container start 1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9 (image=quay.io/ceph/ceph:v19, name=sharp_bohr, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:47 np0005549633 podman[75018]: 2025-12-07 19:48:47.792744152 +0000 UTC m=+0.157014217 container attach 1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9 (image=quay.io/ceph/ceph:v19, name=sharp_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 14:48:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  7 14:48:48 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1967981161' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 14:48:48 np0005549633 sharp_bohr[75034]: 
Dec  7 14:48:48 np0005549633 sharp_bohr[75034]: [global]
Dec  7 14:48:48 np0005549633 sharp_bohr[75034]: #011fsid = a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:48:48 np0005549633 sharp_bohr[75034]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  7 14:48:48 np0005549633 systemd[1]: libpod-1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9.scope: Deactivated successfully.
Dec  7 14:48:48 np0005549633 conmon[75034]: conmon 1360fc832e3cc591efde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9.scope/container/memory.events
Dec  7 14:48:48 np0005549633 podman[75018]: 2025-12-07 19:48:48.178455464 +0000 UTC m=+0.542725459 container died 1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9 (image=quay.io/ceph/ceph:v19, name=sharp_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 14:48:48 np0005549633 systemd[1]: var-lib-containers-storage-overlay-ec8aaac8c3d7f8d622ea0909808d8eeaff0a8b686a59e2dadad2365226cb0d7f-merged.mount: Deactivated successfully.
Dec  7 14:48:48 np0005549633 podman[75018]: 2025-12-07 19:48:48.221707871 +0000 UTC m=+0.585977856 container remove 1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9 (image=quay.io/ceph/ceph:v19, name=sharp_bohr, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:48 np0005549633 systemd[1]: libpod-conmon-1360fc832e3cc591efde9b05bccb44def3a5a268e1cf3819be9b3096a8f49de9.scope: Deactivated successfully.
Dec  7 14:48:48 np0005549633 podman[75071]: 2025-12-07 19:48:48.313864942 +0000 UTC m=+0.058507070 container create 94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c (image=quay.io/ceph/ceph:v19, name=objective_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 14:48:48 np0005549633 systemd[1]: Started libpod-conmon-94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c.scope.
Dec  7 14:48:48 np0005549633 podman[75071]: 2025-12-07 19:48:48.294470835 +0000 UTC m=+0.039112973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:48 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d690b17ba88bc5c329947c8f3329cd42ea32b0fee6a3f81ab7f222fd7eb414/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d690b17ba88bc5c329947c8f3329cd42ea32b0fee6a3f81ab7f222fd7eb414/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0d690b17ba88bc5c329947c8f3329cd42ea32b0fee6a3f81ab7f222fd7eb414/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:48 np0005549633 podman[75071]: 2025-12-07 19:48:48.418920999 +0000 UTC m=+0.163563137 container init 94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c (image=quay.io/ceph/ceph:v19, name=objective_agnesi, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:48:48 np0005549633 podman[75071]: 2025-12-07 19:48:48.429153072 +0000 UTC m=+0.173795190 container start 94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c (image=quay.io/ceph/ceph:v19, name=objective_agnesi, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:48 np0005549633 podman[75071]: 2025-12-07 19:48:48.433781359 +0000 UTC m=+0.178423477 container attach 94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c (image=quay.io/ceph/ceph:v19, name=objective_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 14:48:48 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1967981161' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 14:48:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  7 14:48:48 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3496429863' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:48:49 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3496429863' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  7 14:48:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3496429863' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  7 14:48:49 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.dyzcyj(active, since 4s)
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  1: '-n'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  2: 'mgr.compute-0.dyzcyj'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  3: '-f'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  4: '--setuser'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  5: 'ceph'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  6: '--setgroup'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  7: 'ceph'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr respawn  exe_path /proc/self/exe
Dec  7 14:48:49 np0005549633 systemd[1]: libpod-94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c.scope: Deactivated successfully.
Dec  7 14:48:49 np0005549633 podman[75071]: 2025-12-07 19:48:49.601526582 +0000 UTC m=+1.346168700 container died 94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c (image=quay.io/ceph/ceph:v19, name=objective_agnesi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 14:48:49 np0005549633 systemd[1]: var-lib-containers-storage-overlay-a0d690b17ba88bc5c329947c8f3329cd42ea32b0fee6a3f81ab7f222fd7eb414-merged.mount: Deactivated successfully.
Dec  7 14:48:49 np0005549633 podman[75071]: 2025-12-07 19:48:49.63683546 +0000 UTC m=+1.381477608 container remove 94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c (image=quay.io/ceph/ceph:v19, name=objective_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2)
Dec  7 14:48:49 np0005549633 systemd[1]: libpod-conmon-94f5284676f7800eb93c67a2e698b0a38bd907d29dd4955193ea143f664a3a8c.scope: Deactivated successfully.
Dec  7 14:48:49 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setuser ceph since I am not root
Dec  7 14:48:49 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setgroup ceph since I am not root
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: pidfile_write: ignore empty --pid-file
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'alerts'
Dec  7 14:48:49 np0005549633 podman[75126]: 2025-12-07 19:48:49.714746406 +0000 UTC m=+0.055386144 container create ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6 (image=quay.io/ceph/ceph:v19, name=goofy_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 14:48:49 np0005549633 systemd[1]: Started libpod-conmon-ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6.scope.
Dec  7 14:48:49 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:49 np0005549633 podman[75126]: 2025-12-07 19:48:49.690645199 +0000 UTC m=+0.031284987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:49 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa784fe484e90201fa957c0a51d859e855b7e9ff53553febd55abab6bd13482d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:49 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa784fe484e90201fa957c0a51d859e855b7e9ff53553febd55abab6bd13482d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:49 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa784fe484e90201fa957c0a51d859e855b7e9ff53553febd55abab6bd13482d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:49 np0005549633 podman[75126]: 2025-12-07 19:48:49.812444769 +0000 UTC m=+0.153084537 container init ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6 (image=quay.io/ceph/ceph:v19, name=goofy_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 14:48:49 np0005549633 podman[75126]: 2025-12-07 19:48:49.817630613 +0000 UTC m=+0.158270311 container start ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6 (image=quay.io/ceph/ceph:v19, name=goofy_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Dec  7 14:48:49 np0005549633 podman[75126]: 2025-12-07 19:48:49.821721456 +0000 UTC m=+0.162361194 container attach ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6 (image=quay.io/ceph/ceph:v19, name=goofy_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'balancer'
Dec  7 14:48:49 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:49.824+0000 7f95e30cf140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:48:49 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'cephadm'
Dec  7 14:48:49 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:49.910+0000 7f95e30cf140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:48:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  7 14:48:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3749146466' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  7 14:48:50 np0005549633 goofy_aryabhata[75162]: {
Dec  7 14:48:50 np0005549633 goofy_aryabhata[75162]:    "epoch": 5,
Dec  7 14:48:50 np0005549633 goofy_aryabhata[75162]:    "available": true,
Dec  7 14:48:50 np0005549633 goofy_aryabhata[75162]:    "active_name": "compute-0.dyzcyj",
Dec  7 14:48:50 np0005549633 goofy_aryabhata[75162]:    "num_standby": 0
Dec  7 14:48:50 np0005549633 goofy_aryabhata[75162]: }
Dec  7 14:48:50 np0005549633 systemd[1]: libpod-ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6.scope: Deactivated successfully.
Dec  7 14:48:50 np0005549633 podman[75126]: 2025-12-07 19:48:50.264371115 +0000 UTC m=+0.605010843 container died ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6 (image=quay.io/ceph/ceph:v19, name=goofy_aryabhata, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:50 np0005549633 systemd[1]: var-lib-containers-storage-overlay-fa784fe484e90201fa957c0a51d859e855b7e9ff53553febd55abab6bd13482d-merged.mount: Deactivated successfully.
Dec  7 14:48:50 np0005549633 podman[75126]: 2025-12-07 19:48:50.314282655 +0000 UTC m=+0.654922393 container remove ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6 (image=quay.io/ceph/ceph:v19, name=goofy_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:48:50 np0005549633 systemd[1]: libpod-conmon-ff89bb8ff4650ca17bb6d3ad16a9b6bffe312b3612aabb9ebaa35c98160ee7f6.scope: Deactivated successfully.
Dec  7 14:48:50 np0005549633 podman[75209]: 2025-12-07 19:48:50.386483544 +0000 UTC m=+0.048068831 container create 6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af (image=quay.io/ceph/ceph:v19, name=suspicious_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:48:50 np0005549633 systemd[1]: Started libpod-conmon-6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af.scope.
Dec  7 14:48:50 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d59744a10022878988638c2cd62ece2c685ce35a00e113329a68c9603d28a18/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:50 np0005549633 podman[75209]: 2025-12-07 19:48:50.362538671 +0000 UTC m=+0.024123998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d59744a10022878988638c2cd62ece2c685ce35a00e113329a68c9603d28a18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d59744a10022878988638c2cd62ece2c685ce35a00e113329a68c9603d28a18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:50 np0005549633 podman[75209]: 2025-12-07 19:48:50.473492881 +0000 UTC m=+0.135078198 container init 6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af (image=quay.io/ceph/ceph:v19, name=suspicious_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:50 np0005549633 podman[75209]: 2025-12-07 19:48:50.478791268 +0000 UTC m=+0.140376555 container start 6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af (image=quay.io/ceph/ceph:v19, name=suspicious_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:48:50 np0005549633 podman[75209]: 2025-12-07 19:48:50.482677706 +0000 UTC m=+0.144262973 container attach 6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af (image=quay.io/ceph/ceph:v19, name=suspicious_edison, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:50 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3496429863' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  7 14:48:50 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'crash'
Dec  7 14:48:50 np0005549633 ceph-mgr[74680]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:48:50 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'dashboard'
Dec  7 14:48:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:50.833+0000 7f95e30cf140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'devicehealth'
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:51.464+0000 7f95e30cf140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  from numpy import show_config as show_numpy_config
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:51.641+0000 7f95e30cf140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'influx'
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:51.714+0000 7f95e30cf140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'insights'
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'iostat'
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:48:51 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'k8sevents'
Dec  7 14:48:51 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:51.868+0000 7f95e30cf140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:48:52 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'localpool'
Dec  7 14:48:52 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 14:48:52 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mirroring'
Dec  7 14:48:52 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'nfs'
Dec  7 14:48:52 np0005549633 ceph-mgr[74680]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:48:52 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'orchestrator'
Dec  7 14:48:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:52.885+0000 7f95e30cf140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.098+0000 7f95e30cf140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_support'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.185+0000 7f95e30cf140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.273+0000 7f95e30cf140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'progress'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.346+0000 7f95e30cf140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'prometheus'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.413+0000 7f95e30cf140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rbd_support'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.757+0000 7f95e30cf140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:48:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'restful'
Dec  7 14:48:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:53.851+0000 7f95e30cf140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rgw'
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rook'
Dec  7 14:48:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:54.267+0000 7f95e30cf140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'selftest'
Dec  7 14:48:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:54.809+0000 7f95e30cf140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'snap_schedule'
Dec  7 14:48:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:54.877+0000 7f95e30cf140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:48:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'stats'
Dec  7 14:48:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:54.960+0000 7f95e30cf140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'status'
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telegraf'
Dec  7 14:48:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:55.114+0000 7f95e30cf140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telemetry'
Dec  7 14:48:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:55.178+0000 7f95e30cf140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 14:48:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:55.328+0000 7f95e30cf140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'volumes'
Dec  7 14:48:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:55.539+0000 7f95e30cf140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'zabbix'
Dec  7 14:48:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:55.835+0000 7f95e30cf140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:48:55.909+0000 7f95e30cf140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dyzcyj
Dec  7 14:48:55 np0005549633 ceph-mgr[74680]: ms_deliver_dispatch: unhandled message 0x561806fe2d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  7 14:48:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map Activating!
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map I am now activating
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.dyzcyj(active, starting, since 0.0910823s)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: balancer
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Starting
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Manager daemon compute-0.dyzcyj is now available
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:48:56
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] do_upmap
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] No pools available
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: Activating manager daemon compute-0.dyzcyj
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: Manager daemon compute-0.dyzcyj is now available
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: cephadm
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: crash
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: devicehealth
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Starting
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: iostat
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: nfs
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: orchestrator
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: pg_autoscaler
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: progress
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [progress INFO root] Loading...
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [progress INFO root] No stored events to load
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded [] historic events
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] recovery thread starting
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] starting setup
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: rbd_support
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: restful
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [restful WARNING root] server not running: no certificate configured
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: status
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: telemetry
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] PerfHandler: starting
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TaskHandler: starting
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"} v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] setup complete
Dec  7 14:48:56 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: volumes
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec  7 14:48:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.dyzcyj(active, since 1.10335s)
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  7 14:48:57 np0005549633 suspicious_edison[75226]: {
Dec  7 14:48:57 np0005549633 suspicious_edison[75226]:    "mgrmap_epoch": 7,
Dec  7 14:48:57 np0005549633 suspicious_edison[75226]:    "initialized": true
Dec  7 14:48:57 np0005549633 suspicious_edison[75226]: }
Dec  7 14:48:57 np0005549633 systemd[1]: libpod-6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af.scope: Deactivated successfully.
Dec  7 14:48:57 np0005549633 podman[75209]: 2025-12-07 19:48:57.062661443 +0000 UTC m=+6.724246760 container died 6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af (image=quay.io/ceph/ceph:v19, name=suspicious_edison, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: Found migration_current of "None". Setting to last migration.
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:57 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8d59744a10022878988638c2cd62ece2c685ce35a00e113329a68c9603d28a18-merged.mount: Deactivated successfully.
Dec  7 14:48:57 np0005549633 podman[75209]: 2025-12-07 19:48:57.112335228 +0000 UTC m=+6.773920535 container remove 6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af (image=quay.io/ceph/ceph:v19, name=suspicious_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 14:48:57 np0005549633 systemd[1]: libpod-conmon-6aaa9b1bdd17c1c8b6747ded98a3bc984db3aae928df09b92119e258922406af.scope: Deactivated successfully.
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.207785729 +0000 UTC m=+0.063666503 container create 9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2 (image=quay.io/ceph/ceph:v19, name=boring_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:48:57 np0005549633 systemd[1]: Started libpod-conmon-9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2.scope.
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.182674134 +0000 UTC m=+0.038554988 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:57 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d28f8291581cad19ef19e3cde7a1d7d63dbae823694e7a91a909c01feb1bb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d28f8291581cad19ef19e3cde7a1d7d63dbae823694e7a91a909c01feb1bb4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d28f8291581cad19ef19e3cde7a1d7d63dbae823694e7a91a909c01feb1bb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.331340178 +0000 UTC m=+0.187220992 container init 9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2 (image=quay.io/ceph/ceph:v19, name=boring_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.337606042 +0000 UTC m=+0.193486816 container start 9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2 (image=quay.io/ceph/ceph:v19, name=boring_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.341433977 +0000 UTC m=+0.197314781 container attach 9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2 (image=quay.io/ceph/ceph:v19, name=boring_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019926848 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 14:48:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 14:48:57 np0005549633 systemd[1]: libpod-9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2.scope: Deactivated successfully.
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.770484 +0000 UTC m=+0.626364804 container died 9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2 (image=quay.io/ceph/ceph:v19, name=boring_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:48:57 np0005549633 systemd[1]: var-lib-containers-storage-overlay-47d28f8291581cad19ef19e3cde7a1d7d63dbae823694e7a91a909c01feb1bb4-merged.mount: Deactivated successfully.
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:48:57] ENGINE Bus STARTING
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:48:57] ENGINE Bus STARTING
Dec  7 14:48:57 np0005549633 podman[75376]: 2025-12-07 19:48:57.828713871 +0000 UTC m=+0.684594675 container remove 9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2 (image=quay.io/ceph/ceph:v19, name=boring_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:48:57 np0005549633 systemd[1]: libpod-conmon-9f44430dc2f2ede5bf599f08d79b5a31d68348758271760ab248b179a2f736f2.scope: Deactivated successfully.
Dec  7 14:48:57 np0005549633 podman[75442]: 2025-12-07 19:48:57.904064376 +0000 UTC m=+0.049654524 container create 123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2 (image=quay.io/ceph/ceph:v19, name=naughty_chaplygin, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:48:57] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:48:57 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:48:57] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:48:57 np0005549633 systemd[1]: Started libpod-conmon-123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2.scope.
Dec  7 14:48:57 np0005549633 podman[75442]: 2025-12-07 19:48:57.883791796 +0000 UTC m=+0.029381974 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:57 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55223820f14840be2dfbc2e0a699864ea7959a5754a7ee8b589b36806984e84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55223820f14840be2dfbc2e0a699864ea7959a5754a7ee8b589b36806984e84/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e55223820f14840be2dfbc2e0a699864ea7959a5754a7ee8b589b36806984e84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:48:58 np0005549633 podman[75442]: 2025-12-07 19:48:58.014688587 +0000 UTC m=+0.160278755 container init 123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2 (image=quay.io/ceph/ceph:v19, name=naughty_chaplygin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:58 np0005549633 podman[75442]: 2025-12-07 19:48:58.025146477 +0000 UTC m=+0.170736645 container start 123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2 (image=quay.io/ceph/ceph:v19, name=naughty_chaplygin, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:48:58 np0005549633 podman[75442]: 2025-12-07 19:48:58.030129705 +0000 UTC m=+0.175719873 container attach 123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2 (image=quay.io/ceph/ceph:v19, name=naughty_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:48:58] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:48:58] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:48:58] ENGINE Bus STARTED
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:48:58] ENGINE Bus STARTED
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:48:58] ENGINE Client ('192.168.122.100', 36058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:48:58] ENGINE Client ('192.168.122.100', 36058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Set ssh ssh_user
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  7 14:48:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Set ssh ssh_config
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  7 14:48:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  7 14:48:58 np0005549633 naughty_chaplygin[75459]: ssh user set to ceph-admin. sudo will be used
Dec  7 14:48:58 np0005549633 systemd[1]: libpod-123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2.scope: Deactivated successfully.
Dec  7 14:48:58 np0005549633 podman[75442]: 2025-12-07 19:48:58.436489829 +0000 UTC m=+0.582079987 container died 123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2 (image=quay.io/ceph/ceph:v19, name=naughty_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:48:58 np0005549633 systemd[1]: var-lib-containers-storage-overlay-e55223820f14840be2dfbc2e0a699864ea7959a5754a7ee8b589b36806984e84-merged.mount: Deactivated successfully.
Dec  7 14:48:58 np0005549633 podman[75442]: 2025-12-07 19:48:58.480129037 +0000 UTC m=+0.625719225 container remove 123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2 (image=quay.io/ceph/ceph:v19, name=naughty_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 14:48:58 np0005549633 systemd[1]: libpod-conmon-123508e0c6cab8144fec642a86b19c7de8028dff11fcc4c8fa8b123d31fbeda2.scope: Deactivated successfully.
Dec  7 14:48:58 np0005549633 podman[75510]: 2025-12-07 19:48:58.57636451 +0000 UTC m=+0.063576740 container create 5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7 (image=quay.io/ceph/ceph:v19, name=pensive_mendel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 14:48:58 np0005549633 podman[75510]: 2025-12-07 19:48:58.555698138 +0000 UTC m=+0.042910378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:48:58 np0005549633 systemd[1]: Started libpod-conmon-5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7.scope.
Dec  7 14:48:58 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:48:58 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49875482d59eefd78ae67d6344cfae50a48f8cc2bd3f8235b016aad4f9e71494/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:58 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49875482d59eefd78ae67d6344cfae50a48f8cc2bd3f8235b016aad4f9e71494/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:58 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49875482d59eefd78ae67d6344cfae50a48f8cc2bd3f8235b016aad4f9e71494/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:58 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49875482d59eefd78ae67d6344cfae50a48f8cc2bd3f8235b016aad4f9e71494/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:58 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49875482d59eefd78ae67d6344cfae50a48f8cc2bd3f8235b016aad4f9e71494/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:48:59 np0005549633 podman[75510]: 2025-12-07 19:48:59.023477172 +0000 UTC m=+0.510689492 container init 5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7 (image=quay.io/ceph/ceph:v19, name=pensive_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.dyzcyj(active, since 3s)
Dec  7 14:48:59 np0005549633 podman[75510]: 2025-12-07 19:48:59.034790205 +0000 UTC m=+0.522002465 container start 5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7 (image=quay.io/ceph/ceph:v19, name=pensive_mendel, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 14:48:59 np0005549633 podman[75510]: 2025-12-07 19:48:59.151159615 +0000 UTC m=+0.638371935 container attach 5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7 (image=quay.io/ceph/ceph:v19, name=pensive_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:48:57] ENGINE Bus STARTING
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:48:57] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:48:58] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:48:58] ENGINE Bus STARTED
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:48:58] ENGINE Client ('192.168.122.100', 36058) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: Set ssh ssh_user
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: Set ssh ssh_config
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: ssh user set to ceph-admin. sudo will be used
Dec  7 14:48:59 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:48:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  7 14:49:00 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:00 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  7 14:49:00 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  7 14:49:00 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Set ssh private key
Dec  7 14:49:00 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  7 14:49:00 np0005549633 systemd[1]: libpod-5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7.scope: Deactivated successfully.
Dec  7 14:49:00 np0005549633 podman[75510]: 2025-12-07 19:49:00.863073717 +0000 UTC m=+2.350285977 container died 5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7 (image=quay.io/ceph/ceph:v19, name=pensive_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:01 np0005549633 systemd[1]: var-lib-containers-storage-overlay-49875482d59eefd78ae67d6344cfae50a48f8cc2bd3f8235b016aad4f9e71494-merged.mount: Deactivated successfully.
Dec  7 14:49:01 np0005549633 podman[75510]: 2025-12-07 19:49:01.212265569 +0000 UTC m=+2.699477789 container remove 5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7 (image=quay.io/ceph/ceph:v19, name=pensive_mendel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 14:49:01 np0005549633 systemd[1]: libpod-conmon-5cfd10193ee14b6cb0eec8734c9b1e46428fea2c79020a11445c0b930dd465d7.scope: Deactivated successfully.
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.283309465 +0000 UTC m=+0.047040143 container create 420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf (image=quay.io/ceph/ceph:v19, name=clever_ellis, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:01 np0005549633 systemd[1]: Started libpod-conmon-420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf.scope.
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.264253248 +0000 UTC m=+0.027983976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:01 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8dede58626e36641705710dec4d9686ffb05d7ce366e4902b475c02d3595aa/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8dede58626e36641705710dec4d9686ffb05d7ce366e4902b475c02d3595aa/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8dede58626e36641705710dec4d9686ffb05d7ce366e4902b475c02d3595aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8dede58626e36641705710dec4d9686ffb05d7ce366e4902b475c02d3595aa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b8dede58626e36641705710dec4d9686ffb05d7ce366e4902b475c02d3595aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.396300351 +0000 UTC m=+0.160030999 container init 420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf (image=quay.io/ceph/ceph:v19, name=clever_ellis, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.409898198 +0000 UTC m=+0.173628866 container start 420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf (image=quay.io/ceph/ceph:v19, name=clever_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.414607418 +0000 UTC m=+0.178338076 container attach 420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf (image=quay.io/ceph/ceph:v19, name=clever_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:01 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  7 14:49:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:01 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  7 14:49:01 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  7 14:49:01 np0005549633 systemd[1]: libpod-420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf.scope: Deactivated successfully.
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.827229146 +0000 UTC m=+0.590959784 container died 420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf (image=quay.io/ceph/ceph:v19, name=clever_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:01 np0005549633 systemd[1]: var-lib-containers-storage-overlay-3b8dede58626e36641705710dec4d9686ffb05d7ce366e4902b475c02d3595aa-merged.mount: Deactivated successfully.
Dec  7 14:49:01 np0005549633 podman[75571]: 2025-12-07 19:49:01.872768157 +0000 UTC m=+0.636498825 container remove 420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf (image=quay.io/ceph/ceph:v19, name=clever_ellis, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:01 np0005549633 systemd[1]: libpod-conmon-420d98cd2cb53d1dfc5cb6f2dabe460ae1989f8d03be32af6ebc394377395caf.scope: Deactivated successfully.
Dec  7 14:49:01 np0005549633 podman[75624]: 2025-12-07 19:49:01.990130684 +0000 UTC m=+0.083327307 container create 03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7 (image=quay.io/ceph/ceph:v19, name=tender_sutherland, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 14:49:02 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:02 np0005549633 systemd[1]: Started libpod-conmon-03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7.scope.
Dec  7 14:49:02 np0005549633 podman[75624]: 2025-12-07 19:49:01.949218281 +0000 UTC m=+0.042414914 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:02 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd46a2a579895a4fb050e3e912421fdafe930978f8b32004973e06993c242ab2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd46a2a579895a4fb050e3e912421fdafe930978f8b32004973e06993c242ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd46a2a579895a4fb050e3e912421fdafe930978f8b32004973e06993c242ab2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:02 np0005549633 podman[75624]: 2025-12-07 19:49:02.108393436 +0000 UTC m=+0.201590129 container init 03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7 (image=quay.io/ceph/ceph:v19, name=tender_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:02 np0005549633 podman[75624]: 2025-12-07 19:49:02.121487528 +0000 UTC m=+0.214684111 container start 03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7 (image=quay.io/ceph/ceph:v19, name=tender_sutherland, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:02 np0005549633 podman[75624]: 2025-12-07 19:49:02.125983694 +0000 UTC m=+0.219180327 container attach 03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7 (image=quay.io/ceph/ceph:v19, name=tender_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Dec  7 14:49:02 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:02 np0005549633 ceph-mon[74384]: Set ssh ssh_identity_key
Dec  7 14:49:02 np0005549633 ceph-mon[74384]: Set ssh private key
Dec  7 14:49:02 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:02 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:02 np0005549633 tender_sutherland[75641]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqX4MZZgbEvv5iPdSvvxn6dFg572g03iBoBXmv1D0gxL7AZxDu0hTva+W6dUvN6tcVK9TRyrMME0Lnt2BN/GH7PIQDBZkEQR4xZ7CHVsEcRp1Mk14Ei+KIikfBEolo1ZHgEHCPfABJ2KQvbSDRd0J5bPrv1WWzWsoi3VzCRW9ZXZZXPlQr4/CImeP+9HF/WqlAKdSEAZn1tetR4fQqm4oLyxpAuNSrl0chvHxPdpoOpUaXrpPArcg6v4zi6JjkdkcQ9fBvAKmnPLa8/TCXpuZ/1c/ZXJDTgfr6nZ0ss56gqmg4pSje+NMLJ0irwtd4dWIA0o1btDY/azVvJPdqlRtTj+zuy62e+AcRmyKxlpZlOW5dn0qLZIoG8oCCjsnZsooh163BKr24vVlEWzQ2xfQwPtpXLPPpWkCKyJZo6VKV/TjcQR1yOmEP89ssLhdLaKRzfNDya4HO0kzEyMirisWgsvfDm6crQAQtJLGVNhWv9SfHBE4IVqTOHDzVxThHZVM= zuul@controller
Dec  7 14:49:02 np0005549633 systemd[1]: libpod-03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7.scope: Deactivated successfully.
Dec  7 14:49:02 np0005549633 podman[75624]: 2025-12-07 19:49:02.535340291 +0000 UTC m=+0.628536874 container died 03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7 (image=quay.io/ceph/ceph:v19, name=tender_sutherland, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 14:49:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053109 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:02 np0005549633 systemd[1]: var-lib-containers-storage-overlay-bd46a2a579895a4fb050e3e912421fdafe930978f8b32004973e06993c242ab2-merged.mount: Deactivated successfully.
Dec  7 14:49:02 np0005549633 podman[75624]: 2025-12-07 19:49:02.910922863 +0000 UTC m=+1.004119446 container remove 03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7 (image=quay.io/ceph/ceph:v19, name=tender_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 14:49:02 np0005549633 systemd[1]: libpod-conmon-03628f8c82b1cd856dcb0874fc826ba38b0e4fb426ab9bf44017376cc5ca37b7.scope: Deactivated successfully.
Dec  7 14:49:02 np0005549633 podman[75681]: 2025-12-07 19:49:02.998459476 +0000 UTC m=+0.057985296 container create e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020 (image=quay.io/ceph/ceph:v19, name=modest_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:03 np0005549633 systemd[1]: Started libpod-conmon-e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020.scope.
Dec  7 14:49:03 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:03 np0005549633 podman[75681]: 2025-12-07 19:49:02.966032119 +0000 UTC m=+0.025557989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fc94eb2bcee855cdabb49fb96e2ef6442ea354cdd87a8e82dc194461cce791/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fc94eb2bcee855cdabb49fb96e2ef6442ea354cdd87a8e82dc194461cce791/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5fc94eb2bcee855cdabb49fb96e2ef6442ea354cdd87a8e82dc194461cce791/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:03 np0005549633 ceph-mon[74384]: Set ssh ssh_identity_pub
Dec  7 14:49:03 np0005549633 podman[75681]: 2025-12-07 19:49:03.082245805 +0000 UTC m=+0.141771675 container init e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020 (image=quay.io/ceph/ceph:v19, name=modest_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:03 np0005549633 podman[75681]: 2025-12-07 19:49:03.225465628 +0000 UTC m=+0.284991458 container start e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020 (image=quay.io/ceph/ceph:v19, name=modest_shirley, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:03 np0005549633 podman[75681]: 2025-12-07 19:49:03.230540887 +0000 UTC m=+0.290066707 container attach e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020 (image=quay.io/ceph/ceph:v19, name=modest_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:49:03 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:03 np0005549633 systemd[1]: Created slice User Slice of UID 42477.
Dec  7 14:49:03 np0005549633 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  7 14:49:03 np0005549633 systemd-logind[797]: New session 21 of user ceph-admin.
Dec  7 14:49:03 np0005549633 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  7 14:49:03 np0005549633 systemd[1]: Starting User Manager for UID 42477...
Dec  7 14:49:04 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:04 np0005549633 systemd-logind[797]: New session 23 of user ceph-admin.
Dec  7 14:49:04 np0005549633 systemd[75727]: Queued start job for default target Main User Target.
Dec  7 14:49:04 np0005549633 systemd[75727]: Created slice User Application Slice.
Dec  7 14:49:04 np0005549633 systemd[75727]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 14:49:04 np0005549633 systemd[75727]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 14:49:04 np0005549633 systemd[75727]: Reached target Paths.
Dec  7 14:49:04 np0005549633 systemd[75727]: Reached target Timers.
Dec  7 14:49:04 np0005549633 systemd[75727]: Starting D-Bus User Message Bus Socket...
Dec  7 14:49:04 np0005549633 systemd[75727]: Starting Create User's Volatile Files and Directories...
Dec  7 14:49:04 np0005549633 systemd[75727]: Listening on D-Bus User Message Bus Socket.
Dec  7 14:49:04 np0005549633 systemd[75727]: Reached target Sockets.
Dec  7 14:49:04 np0005549633 systemd[75727]: Finished Create User's Volatile Files and Directories.
Dec  7 14:49:04 np0005549633 systemd[75727]: Reached target Basic System.
Dec  7 14:49:04 np0005549633 systemd[1]: Started User Manager for UID 42477.
Dec  7 14:49:04 np0005549633 systemd[75727]: Reached target Main User Target.
Dec  7 14:49:04 np0005549633 systemd[75727]: Startup finished in 199ms.
Dec  7 14:49:04 np0005549633 systemd[1]: Started Session 21 of User ceph-admin.
Dec  7 14:49:04 np0005549633 systemd[1]: Started Session 23 of User ceph-admin.
Dec  7 14:49:04 np0005549633 systemd-logind[797]: New session 24 of user ceph-admin.
Dec  7 14:49:04 np0005549633 systemd[1]: Started Session 24 of User ceph-admin.
Dec  7 14:49:05 np0005549633 systemd-logind[797]: New session 25 of user ceph-admin.
Dec  7 14:49:05 np0005549633 systemd[1]: Started Session 25 of User ceph-admin.
Dec  7 14:49:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  7 14:49:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  7 14:49:05 np0005549633 systemd-logind[797]: New session 26 of user ceph-admin.
Dec  7 14:49:05 np0005549633 systemd[1]: Started Session 26 of User ceph-admin.
Dec  7 14:49:05 np0005549633 ceph-mon[74384]: Deploying cephadm binary to compute-0
Dec  7 14:49:05 np0005549633 systemd-logind[797]: New session 27 of user ceph-admin.
Dec  7 14:49:05 np0005549633 systemd[1]: Started Session 27 of User ceph-admin.
Dec  7 14:49:06 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:06 np0005549633 systemd-logind[797]: New session 28 of user ceph-admin.
Dec  7 14:49:06 np0005549633 systemd[1]: Started Session 28 of User ceph-admin.
Dec  7 14:49:06 np0005549633 systemd-logind[797]: New session 29 of user ceph-admin.
Dec  7 14:49:06 np0005549633 systemd[1]: Started Session 29 of User ceph-admin.
Dec  7 14:49:07 np0005549633 systemd-logind[797]: New session 30 of user ceph-admin.
Dec  7 14:49:07 np0005549633 systemd[1]: Started Session 30 of User ceph-admin.
Dec  7 14:49:07 np0005549633 systemd-logind[797]: New session 31 of user ceph-admin.
Dec  7 14:49:07 np0005549633 systemd[1]: Started Session 31 of User ceph-admin.
Dec  7 14:49:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:08 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:08 np0005549633 systemd-logind[797]: New session 32 of user ceph-admin.
Dec  7 14:49:08 np0005549633 systemd[1]: Started Session 32 of User ceph-admin.
Dec  7 14:49:09 np0005549633 systemd-logind[797]: New session 33 of user ceph-admin.
Dec  7 14:49:09 np0005549633 systemd[1]: Started Session 33 of User ceph-admin.
Dec  7 14:49:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:09 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Added host compute-0
Dec  7 14:49:09 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  7 14:49:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 14:49:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 14:49:09 np0005549633 modest_shirley[75697]: Added host 'compute-0' with addr '192.168.122.100'
Dec  7 14:49:09 np0005549633 systemd[1]: libpod-e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020.scope: Deactivated successfully.
Dec  7 14:49:10 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:10 np0005549633 podman[76089]: 2025-12-07 19:49:10.041885547 +0000 UTC m=+0.051602889 container died e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020 (image=quay.io/ceph/ceph:v19, name=modest_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:10 np0005549633 systemd[1]: var-lib-containers-storage-overlay-f5fc94eb2bcee855cdabb49fb96e2ef6442ea354cdd87a8e82dc194461cce791-merged.mount: Deactivated successfully.
Dec  7 14:49:10 np0005549633 podman[76089]: 2025-12-07 19:49:10.098966567 +0000 UTC m=+0.108683879 container remove e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020 (image=quay.io/ceph/ceph:v19, name=modest_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:10 np0005549633 systemd[1]: libpod-conmon-e309ddb57c8900c8a8649888e9611885ea2db82c132eb2503929f861abd12020.scope: Deactivated successfully.
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.228060619 +0000 UTC m=+0.082936696 container create 98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae (image=quay.io/ceph/ceph:v19, name=nifty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:10 np0005549633 systemd[1]: Started libpod-conmon-98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae.scope.
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.197340769 +0000 UTC m=+0.052216906 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:10 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070c314eeeeeec623416844f61e2a78e932d9331a738e307b4d7bd7ccff929af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070c314eeeeeec623416844f61e2a78e932d9331a738e307b4d7bd7ccff929af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/070c314eeeeeec623416844f61e2a78e932d9331a738e307b4d7bd7ccff929af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.335978105 +0000 UTC m=+0.190854222 container init 98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae (image=quay.io/ceph/ceph:v19, name=nifty_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.359329922 +0000 UTC m=+0.214205999 container start 98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae (image=quay.io/ceph/ceph:v19, name=nifty_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.364369991 +0000 UTC m=+0.219246058 container attach 98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae (image=quay.io/ceph/ceph:v19, name=nifty_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 14:49:10 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:10 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  7 14:49:10 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  7 14:49:10 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 14:49:10 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:10 np0005549633 nifty_northcutt[76161]: Scheduled mon update...
Dec  7 14:49:10 np0005549633 systemd[1]: libpod-98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae.scope: Deactivated successfully.
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.932319027 +0000 UTC m=+0.787195094 container died 98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae (image=quay.io/ceph/ceph:v19, name=nifty_northcutt, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:10 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:10 np0005549633 ceph-mon[74384]: Added host compute-0
Dec  7 14:49:10 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:10 np0005549633 systemd[1]: var-lib-containers-storage-overlay-070c314eeeeeec623416844f61e2a78e932d9331a738e307b4d7bd7ccff929af-merged.mount: Deactivated successfully.
Dec  7 14:49:10 np0005549633 podman[76144]: 2025-12-07 19:49:10.988949405 +0000 UTC m=+0.843825452 container remove 98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae (image=quay.io/ceph/ceph:v19, name=nifty_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 14:49:10 np0005549633 systemd[1]: libpod-conmon-98781e6c40921ed484914945ab597297d82175c20919e6e1f9bbf1e004b8a4ae.scope: Deactivated successfully.
Dec  7 14:49:11 np0005549633 podman[76177]: 2025-12-07 19:49:11.034981088 +0000 UTC m=+0.557723264 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.081042602 +0000 UTC m=+0.066757197 container create 1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d (image=quay.io/ceph/ceph:v19, name=trusting_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.053168161 +0000 UTC m=+0.038882846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:11 np0005549633 systemd[1]: Started libpod-conmon-1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d.scope.
Dec  7 14:49:11 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50332a823e1c701c5030a7b946c05bb821f325e8726dda64de45022907676e5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50332a823e1c701c5030a7b946c05bb821f325e8726dda64de45022907676e5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50332a823e1c701c5030a7b946c05bb821f325e8726dda64de45022907676e5b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.220971035 +0000 UTC m=+0.206685620 container init 1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d (image=quay.io/ceph/ceph:v19, name=trusting_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.235055004 +0000 UTC m=+0.220769629 container start 1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d (image=quay.io/ceph/ceph:v19, name=trusting_kirch, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.239473096 +0000 UTC m=+0.225187681 container attach 1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d (image=quay.io/ceph/ceph:v19, name=trusting_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.256925 +0000 UTC m=+0.064006223 container create fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980 (image=quay.io/ceph/ceph:v19, name=brave_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 14:49:11 np0005549633 systemd[1]: Started libpod-conmon-fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980.scope.
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.233794999 +0000 UTC m=+0.040876272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:11 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.381330672 +0000 UTC m=+0.188411915 container init fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980 (image=quay.io/ceph/ceph:v19, name=brave_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.3920912 +0000 UTC m=+0.199172423 container start fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980 (image=quay.io/ceph/ceph:v19, name=brave_hoover, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.395918086 +0000 UTC m=+0.202999309 container attach fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980 (image=quay.io/ceph/ceph:v19, name=brave_hoover, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:11 np0005549633 brave_hoover[76274]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  7 14:49:11 np0005549633 systemd[1]: libpod-fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980.scope: Deactivated successfully.
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.516382569 +0000 UTC m=+0.323463862 container died fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980 (image=quay.io/ceph/ceph:v19, name=brave_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:11 np0005549633 systemd[1]: var-lib-containers-storage-overlay-fed20501a4e77e77ee146c76c66f77e61202de4ef4b1195b3a4eee31ad25cdeb-merged.mount: Deactivated successfully.
Dec  7 14:49:11 np0005549633 podman[76254]: 2025-12-07 19:49:11.595023135 +0000 UTC m=+0.402104368 container remove fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980 (image=quay.io/ceph/ceph:v19, name=brave_hoover, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:11 np0005549633 systemd[1]: libpod-conmon-fbce296af088391297b7557a903b46ff75d386fafc30d1cd73abec2056fe4980.scope: Deactivated successfully.
Dec  7 14:49:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  7 14:49:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:11 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:11 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  7 14:49:11 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  7 14:49:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 14:49:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:11 np0005549633 trusting_kirch[76255]: Scheduled mgr update...
Dec  7 14:49:11 np0005549633 systemd[1]: libpod-1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d.scope: Deactivated successfully.
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.764533286 +0000 UTC m=+0.750247881 container died 1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d (image=quay.io/ceph/ceph:v19, name=trusting_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:11 np0005549633 systemd[1]: var-lib-containers-storage-overlay-50332a823e1c701c5030a7b946c05bb821f325e8726dda64de45022907676e5b-merged.mount: Deactivated successfully.
Dec  7 14:49:11 np0005549633 podman[76225]: 2025-12-07 19:49:11.810454006 +0000 UTC m=+0.796168601 container remove 1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d (image=quay.io/ceph/ceph:v19, name=trusting_kirch, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:49:11 np0005549633 systemd[1]: libpod-conmon-1254faa8c562779523fc3dcc647f5228371914e481b03b57ca6973b17a0dd37d.scope: Deactivated successfully.
Dec  7 14:49:11 np0005549633 podman[76370]: 2025-12-07 19:49:11.896418266 +0000 UTC m=+0.050735846 container create fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980 (image=quay.io/ceph/ceph:v19, name=confident_galileo, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 14:49:11 np0005549633 systemd[1]: Started libpod-conmon-fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980.scope.
Dec  7 14:49:11 np0005549633 podman[76370]: 2025-12-07 19:49:11.875073864 +0000 UTC m=+0.029391444 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:11 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2be01d701e740a40d986753a836ffd5afea96cc3dd30bce64e45af9dadfee7f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2be01d701e740a40d986753a836ffd5afea96cc3dd30bce64e45af9dadfee7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2be01d701e740a40d986753a836ffd5afea96cc3dd30bce64e45af9dadfee7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:12 np0005549633 podman[76370]: 2025-12-07 19:49:12.003640882 +0000 UTC m=+0.157958522 container init fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980 (image=quay.io/ceph/ceph:v19, name=confident_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:12 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:12 np0005549633 podman[76370]: 2025-12-07 19:49:12.016621102 +0000 UTC m=+0.170938702 container start fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980 (image=quay.io/ceph/ceph:v19, name=confident_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:12 np0005549633 podman[76370]: 2025-12-07 19:49:12.020510379 +0000 UTC m=+0.174828039 container attach fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980 (image=quay.io/ceph/ceph:v19, name=confident_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:12 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:12 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service crash spec with placement *
Dec  7 14:49:12 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:12 np0005549633 confident_galileo[76388]: Scheduled crash update...
Dec  7 14:49:12 np0005549633 systemd[1]: libpod-fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980.scope: Deactivated successfully.
Dec  7 14:49:12 np0005549633 conmon[76388]: conmon fe23182612517ab2c063 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980.scope/container/memory.events
Dec  7 14:49:12 np0005549633 podman[76370]: 2025-12-07 19:49:12.455932678 +0000 UTC m=+0.610250238 container died fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980 (image=quay.io/ceph/ceph:v19, name=confident_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 14:49:12 np0005549633 systemd[1]: var-lib-containers-storage-overlay-b2be01d701e740a40d986753a836ffd5afea96cc3dd30bce64e45af9dadfee7f-merged.mount: Deactivated successfully.
Dec  7 14:49:12 np0005549633 podman[76370]: 2025-12-07 19:49:12.504365618 +0000 UTC m=+0.658683178 container remove fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980 (image=quay.io/ceph/ceph:v19, name=confident_galileo, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:12 np0005549633 systemd[1]: libpod-conmon-fe23182612517ab2c063cb416bad2ec9a644a10fb45d05e5c5c8292839d2f980.scope: Deactivated successfully.
Dec  7 14:49:12 np0005549633 podman[76495]: 2025-12-07 19:49:12.592914898 +0000 UTC m=+0.057705827 container create f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c (image=quay.io/ceph/ceph:v19, name=eager_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:12 np0005549633 systemd[1]: Started libpod-conmon-f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c.scope.
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: Saving service mon spec with placement count:5
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:12 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:12 np0005549633 podman[76495]: 2025-12-07 19:49:12.564162463 +0000 UTC m=+0.028953472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:12 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa50086ba80e8a07f736d24d2add17933500615117df5c0834373e99cc77da7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa50086ba80e8a07f736d24d2add17933500615117df5c0834373e99cc77da7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/caa50086ba80e8a07f736d24d2add17933500615117df5c0834373e99cc77da7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:12 np0005549633 podman[76495]: 2025-12-07 19:49:12.697228525 +0000 UTC m=+0.162019494 container init f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c (image=quay.io/ceph/ceph:v19, name=eager_ptolemy, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:12 np0005549633 podman[76495]: 2025-12-07 19:49:12.705076402 +0000 UTC m=+0.169867361 container start f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c (image=quay.io/ceph/ceph:v19, name=eager_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 14:49:12 np0005549633 podman[76495]: 2025-12-07 19:49:12.709435222 +0000 UTC m=+0.174226181 container attach f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c (image=quay.io/ceph/ceph:v19, name=eager_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:49:13 np0005549633 podman[76606]: 2025-12-07 19:49:13.014616407 +0000 UTC m=+0.072945069 container exec a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3601582291' entity='client.admin' 
Dec  7 14:49:13 np0005549633 podman[76495]: 2025-12-07 19:49:13.094102997 +0000 UTC m=+0.558893956 container died f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c (image=quay.io/ceph/ceph:v19, name=eager_ptolemy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 14:49:13 np0005549633 systemd[1]: libpod-f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c.scope: Deactivated successfully.
Dec  7 14:49:13 np0005549633 systemd[1]: var-lib-containers-storage-overlay-caa50086ba80e8a07f736d24d2add17933500615117df5c0834373e99cc77da7-merged.mount: Deactivated successfully.
Dec  7 14:49:13 np0005549633 podman[76606]: 2025-12-07 19:49:13.126454252 +0000 UTC m=+0.184782904 container exec_died a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:49:13 np0005549633 podman[76495]: 2025-12-07 19:49:13.163305523 +0000 UTC m=+0.628096452 container remove f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c (image=quay.io/ceph/ceph:v19, name=eager_ptolemy, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:13 np0005549633 systemd[1]: libpod-conmon-f32b15036f3d43f87c2e2d6a5e94357ae84ad0948b19640392c888b57af69d7c.scope: Deactivated successfully.
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.244921481 +0000 UTC m=+0.057245156 container create 2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d (image=quay.io/ceph/ceph:v19, name=jovial_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 14:49:13 np0005549633 systemd[1]: Started libpod-conmon-2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d.scope.
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:13 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.213962494 +0000 UTC m=+0.026286209 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723eee0a0b2b7b1bdf47449d99b97e5b8564c6284824537fca879f69b51537ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723eee0a0b2b7b1bdf47449d99b97e5b8564c6284824537fca879f69b51537ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/723eee0a0b2b7b1bdf47449d99b97e5b8564c6284824537fca879f69b51537ef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.32764314 +0000 UTC m=+0.139966785 container init 2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d (image=quay.io/ceph/ceph:v19, name=jovial_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.335150927 +0000 UTC m=+0.147474542 container start 2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d (image=quay.io/ceph/ceph:v19, name=jovial_chaum, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.342733917 +0000 UTC m=+0.155057552 container attach 2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d (image=quay.io/ceph/ceph:v19, name=jovial_chaum, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: Saving service mgr spec with placement count:2
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: Saving service crash spec with placement *
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3601582291' entity='client.admin' 
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:13 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec  7 14:49:13 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:13 np0005549633 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76771 (sysctl)
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.719207705 +0000 UTC m=+0.531531340 container died 2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d (image=quay.io/ceph/ceph:v19, name=jovial_chaum, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 14:49:13 np0005549633 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  7 14:49:13 np0005549633 systemd[1]: libpod-2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d.scope: Deactivated successfully.
Dec  7 14:49:13 np0005549633 systemd[1]: var-lib-containers-storage-overlay-723eee0a0b2b7b1bdf47449d99b97e5b8564c6284824537fca879f69b51537ef-merged.mount: Deactivated successfully.
Dec  7 14:49:13 np0005549633 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  7 14:49:13 np0005549633 podman[76655]: 2025-12-07 19:49:13.770172985 +0000 UTC m=+0.582496610 container remove 2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d (image=quay.io/ceph/ceph:v19, name=jovial_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 14:49:13 np0005549633 systemd[1]: libpod-conmon-2e536486daa8ce2aac55fc5701614fb783edab1bb91e343af776afb32867703d.scope: Deactivated successfully.
Dec  7 14:49:13 np0005549633 podman[76789]: 2025-12-07 19:49:13.910333293 +0000 UTC m=+0.113351797 container create 1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:13 np0005549633 podman[76789]: 2025-12-07 19:49:13.825838456 +0000 UTC m=+0.028856980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:13 np0005549633 systemd[1]: Started libpod-conmon-1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7.scope.
Dec  7 14:49:14 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:14 np0005549633 ceph-mgr[74680]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  7 14:49:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62d9f766dfa3a61ed22470807ee662867baded40a1b3466267a699ccb10ee80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62d9f766dfa3a61ed22470807ee662867baded40a1b3466267a699ccb10ee80/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62d9f766dfa3a61ed22470807ee662867baded40a1b3466267a699ccb10ee80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:14 np0005549633 podman[76789]: 2025-12-07 19:49:14.041078311 +0000 UTC m=+0.244096845 container init 1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:14 np0005549633 podman[76789]: 2025-12-07 19:49:14.053949408 +0000 UTC m=+0.256967922 container start 1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 14:49:14 np0005549633 podman[76789]: 2025-12-07 19:49:14.058174495 +0000 UTC m=+0.261193019 container attach 1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:49:14 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:14 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Added label _admin to host compute-0
Dec  7 14:49:14 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  7 14:49:14 np0005549633 optimistic_meninsky[76807]: Added label _admin to host compute-0
Dec  7 14:49:14 np0005549633 systemd[1]: libpod-1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7.scope: Deactivated successfully.
Dec  7 14:49:14 np0005549633 podman[76789]: 2025-12-07 19:49:14.45768464 +0000 UTC m=+0.660703144 container died 1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:14 np0005549633 systemd[1]: var-lib-containers-storage-overlay-f62d9f766dfa3a61ed22470807ee662867baded40a1b3466267a699ccb10ee80-merged.mount: Deactivated successfully.
Dec  7 14:49:14 np0005549633 podman[76789]: 2025-12-07 19:49:14.522893524 +0000 UTC m=+0.725912038 container remove 1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7 (image=quay.io/ceph/ceph:v19, name=optimistic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 14:49:14 np0005549633 systemd[1]: libpod-conmon-1363a57e3dd00b45ecb5d28579d1247f55a932b42015ec36fe0b5be5fde7d4d7.scope: Deactivated successfully.
Dec  7 14:49:14 np0005549633 podman[76911]: 2025-12-07 19:49:14.613767828 +0000 UTC m=+0.059816665 container create e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2 (image=quay.io/ceph/ceph:v19, name=nifty_galileo, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:14 np0005549633 systemd[1]: Started libpod-conmon-e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2.scope.
Dec  7 14:49:14 np0005549633 podman[76911]: 2025-12-07 19:49:14.582249236 +0000 UTC m=+0.028298163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:14 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e676e81ef22415bf0c49837ebdc5bdafb84709cc4d1d02b576ab80d2503819a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e676e81ef22415bf0c49837ebdc5bdafb84709cc4d1d02b576ab80d2503819a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e676e81ef22415bf0c49837ebdc5bdafb84709cc4d1d02b576ab80d2503819a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:14 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:14 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:14 np0005549633 podman[76911]: 2025-12-07 19:49:14.7113938 +0000 UTC m=+0.157442667 container init e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2 (image=quay.io/ceph/ceph:v19, name=nifty_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 14:49:14 np0005549633 podman[76911]: 2025-12-07 19:49:14.719008261 +0000 UTC m=+0.165057138 container start e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2 (image=quay.io/ceph/ceph:v19, name=nifty_galileo, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 14:49:14 np0005549633 podman[76911]: 2025-12-07 19:49:14.731893787 +0000 UTC m=+0.177942624 container attach e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2 (image=quay.io/ceph/ceph:v19, name=nifty_galileo, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 14:49:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  7 14:49:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/161214178' entity='client.admin' 
Dec  7 14:49:15 np0005549633 nifty_galileo[76940]: set mgr/dashboard/cluster/status
Dec  7 14:49:15 np0005549633 systemd[1]: libpod-e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2.scope: Deactivated successfully.
Dec  7 14:49:15 np0005549633 podman[76911]: 2025-12-07 19:49:15.234125895 +0000 UTC m=+0.680174802 container died e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2 (image=quay.io/ceph/ceph:v19, name=nifty_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:15 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8e676e81ef22415bf0c49837ebdc5bdafb84709cc4d1d02b576ab80d2503819a-merged.mount: Deactivated successfully.
Dec  7 14:49:15 np0005549633 podman[76911]: 2025-12-07 19:49:15.314658763 +0000 UTC m=+0.760707640 container remove e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2 (image=quay.io/ceph/ceph:v19, name=nifty_galileo, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 14:49:15 np0005549633 systemd[1]: libpod-conmon-e72bf2116dce708af0ba3bb78ec9b629213c265619adaa7d252733ed4fdb47b2.scope: Deactivated successfully.
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.368585406 +0000 UTC m=+0.071593303 container create dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:15 np0005549633 systemd[1]: Started libpod-conmon-dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf.scope.
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.341085355 +0000 UTC m=+0.044093312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:49:15 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.460636632 +0000 UTC m=+0.163644569 container init dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.46742022 +0000 UTC m=+0.170428147 container start dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wu, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 14:49:15 np0005549633 magical_wu[77088]: 167 167
Dec  7 14:49:15 np0005549633 systemd[1]: libpod-dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf.scope: Deactivated successfully.
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.480350228 +0000 UTC m=+0.183358125 container attach dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.482207 +0000 UTC m=+0.185214927 container died dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:15 np0005549633 systemd[1]: var-lib-containers-storage-overlay-9f00702be25db48b4c2810b247b96ede1201cbaa158225f6e0be49cf675f7cb5-merged.mount: Deactivated successfully.
Dec  7 14:49:15 np0005549633 podman[77069]: 2025-12-07 19:49:15.550421297 +0000 UTC m=+0.253429234 container remove dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:15 np0005549633 systemd[1]: libpod-conmon-dc05cedb9dcd1aeda4ca247df6c430de0c7d4de9f4428a601a41490710eaf2cf.scope: Deactivated successfully.
Dec  7 14:49:15 np0005549633 podman[77128]: 2025-12-07 19:49:15.771818624 +0000 UTC m=+0.030482175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:49:15 np0005549633 ceph-mon[74384]: Added label _admin to host compute-0
Dec  7 14:49:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:15 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/161214178' entity='client.admin' 
Dec  7 14:49:15 np0005549633 podman[77128]: 2025-12-07 19:49:15.878724482 +0000 UTC m=+0.137388043 container create 92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Dec  7 14:49:15 np0005549633 systemd[1]: Started libpod-conmon-92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926.scope.
Dec  7 14:49:15 np0005549633 python3[77154]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:15 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dbd230fdead0738c387cfc52ac2f1ee40b80a84f5ab5194b33c5096195a1ecb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dbd230fdead0738c387cfc52ac2f1ee40b80a84f5ab5194b33c5096195a1ecb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dbd230fdead0738c387cfc52ac2f1ee40b80a84f5ab5194b33c5096195a1ecb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9dbd230fdead0738c387cfc52ac2f1ee40b80a84f5ab5194b33c5096195a1ecb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:16 np0005549633 podman[77128]: 2025-12-07 19:49:16.009737976 +0000 UTC m=+0.268401517 container init 92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cray, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:16 np0005549633 ceph-mgr[74680]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  7 14:49:16 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:16 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  7 14:49:16 np0005549633 podman[77128]: 2025-12-07 19:49:16.019321142 +0000 UTC m=+0.277984703 container start 92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cray, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  7 14:49:16 np0005549633 podman[77128]: 2025-12-07 19:49:16.02429029 +0000 UTC m=+0.282953891 container attach 92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cray, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.078657644 +0000 UTC m=+0.076438476 container create 7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968 (image=quay.io/ceph/ceph:v19, name=elastic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:49:16 np0005549633 systemd[1]: Started libpod-conmon-7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968.scope.
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.055075512 +0000 UTC m=+0.052856324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:16 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:16 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3528b32bb47adcc9689dda4ce43d80cee500ed76be0b979be1ab1ad2301f0d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:16 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da3528b32bb47adcc9689dda4ce43d80cee500ed76be0b979be1ab1ad2301f0d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.191581859 +0000 UTC m=+0.189362671 container init 7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968 (image=quay.io/ceph/ceph:v19, name=elastic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.198293565 +0000 UTC m=+0.196074407 container start 7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968 (image=quay.io/ceph/ceph:v19, name=elastic_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.20246115 +0000 UTC m=+0.200241942 container attach 7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968 (image=quay.io/ceph/ceph:v19, name=elastic_lamport, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  7 14:49:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2093232539' entity='client.admin' 
Dec  7 14:49:16 np0005549633 systemd[1]: libpod-7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968.scope: Deactivated successfully.
Dec  7 14:49:16 np0005549633 conmon[77178]: conmon 7012f63f7726b2fc076b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968.scope/container/memory.events
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.630540106 +0000 UTC m=+0.628320898 container died 7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968 (image=quay.io/ceph/ceph:v19, name=elastic_lamport, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  7 14:49:16 np0005549633 systemd[1]: var-lib-containers-storage-overlay-da3528b32bb47adcc9689dda4ce43d80cee500ed76be0b979be1ab1ad2301f0d-merged.mount: Deactivated successfully.
Dec  7 14:49:16 np0005549633 podman[77160]: 2025-12-07 19:49:16.672533118 +0000 UTC m=+0.670313910 container remove 7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968 (image=quay.io/ceph/ceph:v19, name=elastic_lamport, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 14:49:16 np0005549633 systemd[1]: libpod-conmon-7012f63f7726b2fc076bbb1df96c5e59b7dc975d6df5a2afb26aa61739f63968.scope: Deactivated successfully.
Dec  7 14:49:16 np0005549633 happy_cray[77157]: [
Dec  7 14:49:16 np0005549633 happy_cray[77157]:    {
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "available": false,
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "being_replaced": false,
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "ceph_device_lvm": false,
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "lsm_data": {},
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "lvs": [],
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "path": "/dev/sr0",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "rejected_reasons": [
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "Has a FileSystem",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "Insufficient space (<5GB)"
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        ],
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        "sys_api": {
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "actuators": null,
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "device_nodes": [
Dec  7 14:49:16 np0005549633 happy_cray[77157]:                "sr0"
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            ],
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "devname": "sr0",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "human_readable_size": "482.00 KB",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "id_bus": "ata",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "model": "QEMU DVD-ROM",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "nr_requests": "2",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "parent": "/dev/sr0",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "partitions": {},
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "path": "/dev/sr0",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "removable": "1",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "rev": "2.5+",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "ro": "0",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "rotational": "1",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "sas_address": "",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "sas_device_handle": "",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "scheduler_mode": "mq-deadline",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "sectors": 0,
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "sectorsize": "2048",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "size": 493568.0,
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "support_discard": "2048",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "type": "disk",
Dec  7 14:49:16 np0005549633 happy_cray[77157]:            "vendor": "QEMU"
Dec  7 14:49:16 np0005549633 happy_cray[77157]:        }
Dec  7 14:49:16 np0005549633 happy_cray[77157]:    }
Dec  7 14:49:16 np0005549633 happy_cray[77157]: ]
Dec  7 14:49:16 np0005549633 ceph-mon[74384]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  7 14:49:16 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2093232539' entity='client.admin' 
Dec  7 14:49:16 np0005549633 systemd[1]: libpod-92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926.scope: Deactivated successfully.
Dec  7 14:49:16 np0005549633 podman[77128]: 2025-12-07 19:49:16.895628152 +0000 UTC m=+1.154291713 container died 92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:16 np0005549633 systemd[1]: var-lib-containers-storage-overlay-9dbd230fdead0738c387cfc52ac2f1ee40b80a84f5ab5194b33c5096195a1ecb-merged.mount: Deactivated successfully.
Dec  7 14:49:16 np0005549633 podman[77128]: 2025-12-07 19:49:16.971022508 +0000 UTC m=+1.229686029 container remove 92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_cray, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:49:16 np0005549633 systemd[1]: libpod-conmon-92d39013e5c22c5c5e87e4968eb69c65fe03ae9299922c1eb5697030ecb0c926.scope: Deactivated successfully.
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:17 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:49:17 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:49:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:17 np0005549633 ansible-async_wrapper.py[78487]: Invoked with j864007878180 30 /home/zuul/.ansible/tmp/ansible-tmp-1765136957.1107721-37143-25273174432668/AnsiballZ_command.py _
Dec  7 14:49:17 np0005549633 ansible-async_wrapper.py[78539]: Starting module and watcher
Dec  7 14:49:17 np0005549633 ansible-async_wrapper.py[78539]: Start watching 78541 (30)
Dec  7 14:49:17 np0005549633 ansible-async_wrapper.py[78541]: Start module (78541)
Dec  7 14:49:17 np0005549633 ansible-async_wrapper.py[78487]: Return async_wrapper task started.
Dec  7 14:49:17 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:49:17 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:49:18 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:18 np0005549633 python3[78542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:18 np0005549633 ceph-mon[74384]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.114996814 +0000 UTC m=+0.062889202 container create 3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9 (image=quay.io/ceph/ceph:v19, name=crazy_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:18 np0005549633 systemd[1]: Started libpod-conmon-3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9.scope.
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.084114049 +0000 UTC m=+0.032006477 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:18 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:18 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791197b10906a96a8932c4d8cdd18ef7f00d15f0c4cbc4f20a674a05d53d390a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:18 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/791197b10906a96a8932c4d8cdd18ef7f00d15f0c4cbc4f20a674a05d53d390a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.220647188 +0000 UTC m=+0.168539576 container init 3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9 (image=quay.io/ceph/ceph:v19, name=crazy_jones, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.234895292 +0000 UTC m=+0.182787670 container start 3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9 (image=quay.io/ceph/ceph:v19, name=crazy_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.239476668 +0000 UTC m=+0.187369026 container attach 3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9 (image=quay.io/ceph/ceph:v19, name=crazy_jones, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:18 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 14:49:18 np0005549633 crazy_jones[78639]: 
Dec  7 14:49:18 np0005549633 crazy_jones[78639]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 14:49:18 np0005549633 systemd[1]: libpod-3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9.scope: Deactivated successfully.
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.648277001 +0000 UTC m=+0.596169399 container died 3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9 (image=quay.io/ceph/ceph:v19, name=crazy_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:18 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:49:18 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:49:18 np0005549633 systemd[1]: var-lib-containers-storage-overlay-791197b10906a96a8932c4d8cdd18ef7f00d15f0c4cbc4f20a674a05d53d390a-merged.mount: Deactivated successfully.
Dec  7 14:49:18 np0005549633 podman[78593]: 2025-12-07 19:49:18.760326071 +0000 UTC m=+0.708218419 container remove 3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9 (image=quay.io/ceph/ceph:v19, name=crazy_jones, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 14:49:18 np0005549633 systemd[1]: libpod-conmon-3e5e823b496d251e20db2d948c416af6785063e274752dd091bb97e0a90217f9.scope: Deactivated successfully.
Dec  7 14:49:18 np0005549633 ansible-async_wrapper.py[78541]: Module complete (78541)
Dec  7 14:49:19 np0005549633 ceph-mon[74384]: Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:49:19 np0005549633 ceph-mon[74384]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:49:19 np0005549633 python3[79020]: ansible-ansible.legacy.async_status Invoked with jid=j864007878180.78487 mode=status _async_dir=/root/.ansible_async
Dec  7 14:49:19 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:49:19 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:49:19 np0005549633 python3[79185]: ansible-ansible.legacy.async_status Invoked with jid=j864007878180.78487 mode=cleanup _async_dir=/root/.ansible_async
Dec  7 14:49:20 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:20 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 73ad6483-946d-4ef8-a08f-535becda89e8 (Updating crash deployment (+1 -> 1))
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:20 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  7 14:49:20 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  7 14:49:20 np0005549633 python3[79386]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  7 14:49:20 np0005549633 podman[79509]: 2025-12-07 19:49:20.845370257 +0000 UTC m=+0.060142165 container create 4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 14:49:20 np0005549633 python3[79495]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:20 np0005549633 systemd[1]: Started libpod-conmon-4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd.scope.
Dec  7 14:49:20 np0005549633 podman[79509]: 2025-12-07 19:49:20.812443397 +0000 UTC m=+0.027215305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:49:20 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:20 np0005549633 podman[79509]: 2025-12-07 19:49:20.932059496 +0000 UTC m=+0.146831394 container init 4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kalam, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Dec  7 14:49:20 np0005549633 podman[79509]: 2025-12-07 19:49:20.940817949 +0000 UTC m=+0.155589857 container start 4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:20 np0005549633 frosty_kalam[79527]: 167 167
Dec  7 14:49:20 np0005549633 podman[79509]: 2025-12-07 19:49:20.945464558 +0000 UTC m=+0.160236496 container attach 4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 14:49:20 np0005549633 systemd[1]: libpod-4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd.scope: Deactivated successfully.
Dec  7 14:49:20 np0005549633 podman[79509]: 2025-12-07 19:49:20.946771614 +0000 UTC m=+0.161543532 container died 4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kalam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:20 np0005549633 systemd[1]: var-lib-containers-storage-overlay-86b63c22796de4d92eecccced7a5cba85b5c222add0dab6061b76f4ec4854e5d-merged.mount: Deactivated successfully.
Dec  7 14:49:20 np0005549633 podman[79526]: 2025-12-07 19:49:20.989963048 +0000 UTC m=+0.084245752 container create 03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8 (image=quay.io/ceph/ceph:v19, name=goofy_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:21 np0005549633 podman[79509]: 2025-12-07 19:49:21.031120488 +0000 UTC m=+0.245892376 container remove 4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_kalam, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 14:49:21 np0005549633 systemd[1]: Started libpod-conmon-03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8.scope.
Dec  7 14:49:21 np0005549633 systemd[1]: libpod-conmon-4c71f15172a76ee4c808667f3a11f8bca09e5d966f28c32dd4c1661a4c6fdfcd.scope: Deactivated successfully.
Dec  7 14:49:21 np0005549633 podman[79526]: 2025-12-07 19:49:20.94917983 +0000 UTC m=+0.043462544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:21 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b22122841dbe728e55e0aa3d5ecfb3e5581d965a32fd47be896166f1817bb0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b22122841dbe728e55e0aa3d5ecfb3e5581d965a32fd47be896166f1817bb0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b22122841dbe728e55e0aa3d5ecfb3e5581d965a32fd47be896166f1817bb0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:21 np0005549633 podman[79526]: 2025-12-07 19:49:21.079684631 +0000 UTC m=+0.173967355 container init 03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8 (image=quay.io/ceph/ceph:v19, name=goofy_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:49:21 np0005549633 podman[79526]: 2025-12-07 19:49:21.088744762 +0000 UTC m=+0.183027456 container start 03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8 (image=quay.io/ceph/ceph:v19, name=goofy_cohen, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  7 14:49:21 np0005549633 podman[79526]: 2025-12-07 19:49:21.093406201 +0000 UTC m=+0.187688885 container attach 03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8 (image=quay.io/ceph/ceph:v19, name=goofy_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:21 np0005549633 systemd[1]: Reloading.
Dec  7 14:49:21 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:21 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:21 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:21 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 14:49:21 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 14:49:21 np0005549633 ceph-mon[74384]: Deploying daemon crash.compute-0 on compute-0
Dec  7 14:49:21 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:49:21 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:49:21 np0005549633 systemd[1]: Reloading.
Dec  7 14:49:21 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 14:49:21 np0005549633 goofy_cohen[79556]: 
Dec  7 14:49:21 np0005549633 goofy_cohen[79556]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 14:49:21 np0005549633 podman[79526]: 2025-12-07 19:49:21.504434825 +0000 UTC m=+0.598717539 container died 03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8 (image=quay.io/ceph/ceph:v19, name=goofy_cohen, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 14:49:21 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:49:21 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:49:21 np0005549633 systemd[1]: libpod-03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8.scope: Deactivated successfully.
Dec  7 14:49:21 np0005549633 systemd[1]: var-lib-containers-storage-overlay-e8b22122841dbe728e55e0aa3d5ecfb3e5581d965a32fd47be896166f1817bb0-merged.mount: Deactivated successfully.
Dec  7 14:49:21 np0005549633 podman[79526]: 2025-12-07 19:49:21.735710025 +0000 UTC m=+0.829992729 container remove 03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8 (image=quay.io/ceph/ceph:v19, name=goofy_cohen, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:49:21 np0005549633 systemd[1]: Starting Ceph crash.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:49:21 np0005549633 systemd[1]: libpod-conmon-03471235e8fd9e2b33e546c3436e6c323a625a4f28aec554bfefe037f6a029f8.scope: Deactivated successfully.
Dec  7 14:49:22 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:22 np0005549633 podman[79718]: 2025-12-07 19:49:22.071853066 +0000 UTC m=+0.056538275 container create ca360f912e5e17321e840eba5f5ec87852df0377ed8c57d5763283a2c5cb32a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 14:49:22 np0005549633 podman[79718]: 2025-12-07 19:49:22.044848099 +0000 UTC m=+0.029533378 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32977149b5419370925b1fcdd1e46b860c46cfe9ffd0e540b15198fbe5c3935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32977149b5419370925b1fcdd1e46b860c46cfe9ffd0e540b15198fbe5c3935/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32977149b5419370925b1fcdd1e46b860c46cfe9ffd0e540b15198fbe5c3935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e32977149b5419370925b1fcdd1e46b860c46cfe9ffd0e540b15198fbe5c3935/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 podman[79718]: 2025-12-07 19:49:22.161728453 +0000 UTC m=+0.146413712 container init ca360f912e5e17321e840eba5f5ec87852df0377ed8c57d5763283a2c5cb32a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:22 np0005549633 podman[79718]: 2025-12-07 19:49:22.175635928 +0000 UTC m=+0.160321147 container start ca360f912e5e17321e840eba5f5ec87852df0377ed8c57d5763283a2c5cb32a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:22 np0005549633 bash[79718]: ca360f912e5e17321e840eba5f5ec87852df0377ed8c57d5763283a2c5cb32a0
Dec  7 14:49:22 np0005549633 systemd[1]: Started Ceph crash.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:22 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 73ad6483-946d-4ef8-a08f-535becda89e8 (Updating crash deployment (+1 -> 1))
Dec  7 14:49:22 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 73ad6483-946d-4ef8-a08f-535becda89e8 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:22 np0005549633 python3[79761]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: 2025-12-07T19:49:22.353+0000 7f7d9340d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: 2025-12-07T19:49:22.353+0000 7f7d9340d640 -1 AuthRegistry(0x7f7d8c069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: 2025-12-07T19:49:22.354+0000 7f7d9340d640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: 2025-12-07T19:49:22.354+0000 7f7d9340d640 -1 AuthRegistry(0x7f7d9340bff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: 2025-12-07T19:49:22.355+0000 7f7d91182640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: 2025-12-07T19:49:22.355+0000 7f7d9340d640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  7 14:49:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-crash-compute-0[79757]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  7 14:49:22 np0005549633 podman[79767]: 2025-12-07 19:49:22.387801909 +0000 UTC m=+0.048085782 container create 0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7 (image=quay.io/ceph/ceph:v19, name=frosty_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 14:49:22 np0005549633 systemd[1]: Started libpod-conmon-0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7.scope.
Dec  7 14:49:22 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a1126fa0e305388cb4b450a5723c53eb14d1ee802f9302e073e6cf74148260/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a1126fa0e305388cb4b450a5723c53eb14d1ee802f9302e073e6cf74148260/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a1126fa0e305388cb4b450a5723c53eb14d1ee802f9302e073e6cf74148260/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:22 np0005549633 podman[79767]: 2025-12-07 19:49:22.369175153 +0000 UTC m=+0.029459026 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:22 np0005549633 podman[79767]: 2025-12-07 19:49:22.479488956 +0000 UTC m=+0.139772849 container init 0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7 (image=quay.io/ceph/ceph:v19, name=frosty_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 14:49:22 np0005549633 podman[79767]: 2025-12-07 19:49:22.493667928 +0000 UTC m=+0.153951811 container start 0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7 (image=quay.io/ceph/ceph:v19, name=frosty_nightingale, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 14:49:22 np0005549633 podman[79767]: 2025-12-07 19:49:22.497818363 +0000 UTC m=+0.158102446 container attach 0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7 (image=quay.io/ceph/ceph:v19, name=frosty_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  7 14:49:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2451131506' entity='client.admin' 
Dec  7 14:49:22 np0005549633 ansible-async_wrapper.py[78539]: Done in kid B.
Dec  7 14:49:22 np0005549633 systemd[1]: libpod-0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7.scope: Deactivated successfully.
Dec  7 14:49:22 np0005549633 podman[79915]: 2025-12-07 19:49:22.897765591 +0000 UTC m=+0.038859687 container died 0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7 (image=quay.io/ceph/ceph:v19, name=frosty_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 14:49:22 np0005549633 systemd[1]: var-lib-containers-storage-overlay-25a1126fa0e305388cb4b450a5723c53eb14d1ee802f9302e073e6cf74148260-merged.mount: Deactivated successfully.
Dec  7 14:49:22 np0005549633 podman[79925]: 2025-12-07 19:49:22.959525569 +0000 UTC m=+0.065398261 container remove 0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7 (image=quay.io/ceph/ceph:v19, name=frosty_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 14:49:22 np0005549633 systemd[1]: libpod-conmon-0bd4499e7cbe06f999a3b3471b894b459c59c2c3ca0f1835725c13680b1e35c7.scope: Deactivated successfully.
Dec  7 14:49:23 np0005549633 podman[79986]: 2025-12-07 19:49:23.208396976 +0000 UTC m=+0.070701178 container exec a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2451131506' entity='client.admin' 
Dec  7 14:49:23 np0005549633 podman[79986]: 2025-12-07 19:49:23.317182057 +0000 UTC m=+0.179486289 container exec_died a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:23 np0005549633 python3[80012]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.399744102 +0000 UTC m=+0.048894765 container create 86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54 (image=quay.io/ceph/ceph:v19, name=vigilant_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:23 np0005549633 systemd[1]: Started libpod-conmon-86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54.scope.
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.378815802 +0000 UTC m=+0.027966485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:23 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761b3f2f6245d056b516383c534817ff5cc246acdd60190f6203efd9ff468d9c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761b3f2f6245d056b516383c534817ff5cc246acdd60190f6203efd9ff468d9c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761b3f2f6245d056b516383c534817ff5cc246acdd60190f6203efd9ff468d9c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.501119536 +0000 UTC m=+0.150270189 container init 86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54 (image=quay.io/ceph/ceph:v19, name=vigilant_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.509964191 +0000 UTC m=+0.159114844 container start 86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54 (image=quay.io/ceph/ceph:v19, name=vigilant_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.513708175 +0000 UTC m=+0.162858828 container attach 86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54 (image=quay.io/ceph/ceph:v19, name=vigilant_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:23 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  7 14:49:23 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:23 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 14:49:23 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  7 14:49:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3876917096' entity='client.admin' 
Dec  7 14:49:23 np0005549633 systemd[1]: libpod-86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54.scope: Deactivated successfully.
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.924313626 +0000 UTC m=+0.573464319 container died 86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54 (image=quay.io/ceph/ceph:v19, name=vigilant_wilson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:49:23 np0005549633 systemd[1]: var-lib-containers-storage-overlay-761b3f2f6245d056b516383c534817ff5cc246acdd60190f6203efd9ff468d9c-merged.mount: Deactivated successfully.
Dec  7 14:49:23 np0005549633 podman[80026]: 2025-12-07 19:49:23.981688094 +0000 UTC m=+0.630838787 container remove 86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54 (image=quay.io/ceph/ceph:v19, name=vigilant_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 14:49:23 np0005549633 systemd[1]: libpod-conmon-86d78d146857389fdf2e16ab1f3d0db3b23263b48a8acbd1fd2978ac75419e54.scope: Deactivated successfully.
Dec  7 14:49:24 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.259674757 +0000 UTC m=+0.062268085 container create 648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5 (image=quay.io/ceph/ceph:v19, name=xenodochial_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 14:49:24 np0005549633 systemd[1]: Started libpod-conmon-648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5.scope.
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.231433565 +0000 UTC m=+0.034026903 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:24 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.35739411 +0000 UTC m=+0.159987488 container init 648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5 (image=quay.io/ceph/ceph:v19, name=xenodochial_bhaskara, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.370323099 +0000 UTC m=+0.172916407 container start 648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5 (image=quay.io/ceph/ceph:v19, name=xenodochial_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.374523995 +0000 UTC m=+0.177117383 container attach 648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5 (image=quay.io/ceph/ceph:v19, name=xenodochial_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 14:49:24 np0005549633 xenodochial_bhaskara[80255]: 167 167
Dec  7 14:49:24 np0005549633 systemd[1]: libpod-648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5.scope: Deactivated successfully.
Dec  7 14:49:24 np0005549633 conmon[80255]: conmon 648f9e9988fb05cddbb0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5.scope/container/memory.events
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.379343718 +0000 UTC m=+0.181937006 container died 648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5 (image=quay.io/ceph/ceph:v19, name=xenodochial_bhaskara, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:49:24 np0005549633 systemd[1]: var-lib-containers-storage-overlay-3cea1e31dffe3d65049db4a2e22d2ac83645afd07d00a647ef2deb4cbd8eae8a-merged.mount: Deactivated successfully.
Dec  7 14:49:24 np0005549633 python3[80252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:24 np0005549633 podman[80212]: 2025-12-07 19:49:24.435412729 +0000 UTC m=+0.238006047 container remove 648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5 (image=quay.io/ceph/ceph:v19, name=xenodochial_bhaskara, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 14:49:24 np0005549633 systemd[1]: libpod-conmon-648f9e9988fb05cddbb0dac32b7427e2ec1ecfc1511124137a4e5620ba06c6b5.scope: Deactivated successfully.
Dec  7 14:49:24 np0005549633 podman[80272]: 2025-12-07 19:49:24.493210118 +0000 UTC m=+0.047838844 container create 74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:24 np0005549633 systemd[1]: Started libpod-conmon-74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654.scope.
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.dyzcyj (unknown last config time)...
Dec  7 14:49:24 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.dyzcyj (unknown last config time)...
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.dyzcyj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dyzcyj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:24 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.dyzcyj on compute-0
Dec  7 14:49:24 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.dyzcyj on compute-0
Dec  7 14:49:24 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dfc9bec4a13d9d9df5b7e44617be985d005ac43d49c2c031c8d31815c04f07/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dfc9bec4a13d9d9df5b7e44617be985d005ac43d49c2c031c8d31815c04f07/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19dfc9bec4a13d9d9df5b7e44617be985d005ac43d49c2c031c8d31815c04f07/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:24 np0005549633 podman[80272]: 2025-12-07 19:49:24.473438961 +0000 UTC m=+0.028067717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:24 np0005549633 podman[80272]: 2025-12-07 19:49:24.581003288 +0000 UTC m=+0.135632004 container init 74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:24 np0005549633 podman[80272]: 2025-12-07 19:49:24.586249963 +0000 UTC m=+0.140878729 container start 74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 14:49:24 np0005549633 podman[80272]: 2025-12-07 19:49:24.590980024 +0000 UTC m=+0.145608780 container attach 74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3876917096' entity='client.admin' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.dyzcyj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.21386652 +0000 UTC m=+0.064026303 container create 539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67 (image=quay.io/ceph/ceph:v19, name=vigorous_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:49:25 np0005549633 systemd[1]: Started libpod-conmon-539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67.scope.
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.187618054 +0000 UTC m=+0.037777877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:25 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.324280266 +0000 UTC m=+0.174440009 container init 539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67 (image=quay.io/ceph/ceph:v19, name=vigorous_fermi, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.329333126 +0000 UTC m=+0.179492899 container start 539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67 (image=quay.io/ceph/ceph:v19, name=vigorous_fermi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.334029686 +0000 UTC m=+0.184189509 container attach 539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67 (image=quay.io/ceph/ceph:v19, name=vigorous_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:49:25 np0005549633 vigorous_fermi[80394]: 167 167
Dec  7 14:49:25 np0005549633 systemd[1]: libpod-539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67.scope: Deactivated successfully.
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.33891144 +0000 UTC m=+0.189071203 container died 539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67 (image=quay.io/ceph/ceph:v19, name=vigorous_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/159938671' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  7 14:49:25 np0005549633 systemd[1]: var-lib-containers-storage-overlay-61540465c37e57c0743ef9463404cf06ff353e3af7c08c354cd499699dafc7c9-merged.mount: Deactivated successfully.
Dec  7 14:49:25 np0005549633 podman[80377]: 2025-12-07 19:49:25.394320294 +0000 UTC m=+0.244480037 container remove 539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67 (image=quay.io/ceph/ceph:v19, name=vigorous_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:25 np0005549633 systemd[1]: libpod-conmon-539f9820826de8eea3b33775d8ab0b6c7b269838fbfde75ad2a6bc7117f06b67.scope: Deactivated successfully.
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: Reconfiguring mgr.compute-0.dyzcyj (unknown last config time)...
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: Reconfiguring daemon mgr.compute-0.dyzcyj on compute-0
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/159938671' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:25 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/159938671' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  7 14:49:26 np0005549633 sharp_euclid[80287]: set require_min_compat_client to mimic
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:49:26 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 1 completed events
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:49:26 np0005549633 systemd[1]: libpod-74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654.scope: Deactivated successfully.
Dec  7 14:49:26 np0005549633 podman[80272]: 2025-12-07 19:49:26.139102873 +0000 UTC m=+1.693731639 container died 74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 14:49:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:26 np0005549633 systemd[1]: var-lib-containers-storage-overlay-19dfc9bec4a13d9d9df5b7e44617be985d005ac43d49c2c031c8d31815c04f07-merged.mount: Deactivated successfully.
Dec  7 14:49:26 np0005549633 podman[80272]: 2025-12-07 19:49:26.197255743 +0000 UTC m=+1.751884499 container remove 74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654 (image=quay.io/ceph/ceph:v19, name=sharp_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 14:49:26 np0005549633 systemd[1]: libpod-conmon-74ba38818abf81f9b32da855248f85e65faf51bf7f91ac4a534f95ad632b8654.scope: Deactivated successfully.
Dec  7 14:49:26 np0005549633 python3[80474]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:27 np0005549633 podman[80475]: 2025-12-07 19:49:27.021733786 +0000 UTC m=+0.062265023 container create d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae (image=quay.io/ceph/ceph:v19, name=priceless_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:49:27 np0005549633 systemd[1]: Started libpod-conmon-d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae.scope.
Dec  7 14:49:27 np0005549633 podman[80475]: 2025-12-07 19:49:26.996829658 +0000 UTC m=+0.037360905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:27 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:27 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e4611e36795b065c9fa5e04b57de6a1bf0573860550e5db23f9637b5eca2f8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:27 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e4611e36795b065c9fa5e04b57de6a1bf0573860550e5db23f9637b5eca2f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:27 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53e4611e36795b065c9fa5e04b57de6a1bf0573860550e5db23f9637b5eca2f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:27 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/159938671' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  7 14:49:27 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:27 np0005549633 podman[80475]: 2025-12-07 19:49:27.128779609 +0000 UTC m=+0.169310856 container init d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae (image=quay.io/ceph/ceph:v19, name=priceless_shtern, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:49:27 np0005549633 podman[80475]: 2025-12-07 19:49:27.140337189 +0000 UTC m=+0.180868386 container start d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae (image=quay.io/ceph/ceph:v19, name=priceless_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 14:49:27 np0005549633 podman[80475]: 2025-12-07 19:49:27.144125174 +0000 UTC m=+0.184656421 container attach d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae (image=quay.io/ceph/ceph:v19, name=priceless_shtern, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:49:27 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:49:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:28 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:28 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Added host compute-0
Dec  7 14:49:28 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:49:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: Added host compute-0
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:29 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec  7 14:49:29 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec  7 14:49:30 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:30 np0005549633 ceph-mon[74384]: Deploying cephadm binary to compute-1
Dec  7 14:49:32 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:33 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:33 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:33 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Added host compute-1
Dec  7 14:49:33 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Added host compute-1
Dec  7 14:49:34 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: Added host compute-1
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:49:34 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:35 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec  7 14:49:35 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec  7 14:49:35 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:36 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:49:36 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:36 np0005549633 ceph-mon[74384]: Deploying cephadm binary to compute-2
Dec  7 14:49:36 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:38 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Added host compute-2
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Added host compute-2
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:39 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 priceless_shtern[80490]: Added host 'compute-0' with addr '192.168.122.100'
Dec  7 14:49:39 np0005549633 priceless_shtern[80490]: Added host 'compute-1' with addr '192.168.122.101'
Dec  7 14:49:39 np0005549633 priceless_shtern[80490]: Added host 'compute-2' with addr '192.168.122.102'
Dec  7 14:49:39 np0005549633 priceless_shtern[80490]: Scheduled mon update...
Dec  7 14:49:39 np0005549633 priceless_shtern[80490]: Scheduled mgr update...
Dec  7 14:49:39 np0005549633 priceless_shtern[80490]: Scheduled osd.default_drive_group update...
Dec  7 14:49:39 np0005549633 systemd[1]: libpod-d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae.scope: Deactivated successfully.
Dec  7 14:49:39 np0005549633 podman[80475]: 2025-12-07 19:49:39.505446001 +0000 UTC m=+12.545977198 container died d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae (image=quay.io/ceph/ceph:v19, name=priceless_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 14:49:39 np0005549633 systemd[1]: var-lib-containers-storage-overlay-53e4611e36795b065c9fa5e04b57de6a1bf0573860550e5db23f9637b5eca2f8-merged.mount: Deactivated successfully.
Dec  7 14:49:39 np0005549633 podman[80475]: 2025-12-07 19:49:39.557836151 +0000 UTC m=+12.598367378 container remove d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae (image=quay.io/ceph/ceph:v19, name=priceless_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 14:49:39 np0005549633 systemd[1]: libpod-conmon-d55529564b1624f0ab1d5fc73bea11caece45954abfae55491b673b780566fae.scope: Deactivated successfully.
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:39 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:40 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:40 np0005549633 python3[80647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.15262507 +0000 UTC m=+0.081774865 container create fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2 (image=quay.io/ceph/ceph:v19, name=suspicious_haibt, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 14:49:40 np0005549633 systemd[1]: Started libpod-conmon-fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2.scope.
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.123220366 +0000 UTC m=+0.052370211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:49:40 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:49:40 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83007942e1748eccf5e3d4c7ea5650ceb686b3b9c342c223389e7c4bbdb8293a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:40 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83007942e1748eccf5e3d4c7ea5650ceb686b3b9c342c223389e7c4bbdb8293a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:40 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83007942e1748eccf5e3d4c7ea5650ceb686b3b9c342c223389e7c4bbdb8293a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.259352133 +0000 UTC m=+0.188501988 container init fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2 (image=quay.io/ceph/ceph:v19, name=suspicious_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.273167985 +0000 UTC m=+0.202317740 container start fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2 (image=quay.io/ceph/ceph:v19, name=suspicious_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.276680912 +0000 UTC m=+0.205830667 container attach fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2 (image=quay.io/ceph/ceph:v19, name=suspicious_haibt, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: Added host compute-2
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: Marking host: compute-1 for OSDSpec preview refresh.
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 14:49:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565376178' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 14:49:40 np0005549633 suspicious_haibt[80665]: 
Dec  7 14:49:40 np0005549633 suspicious_haibt[80665]: {"fsid":"a8ac706f-8288-541e-8e56-e1124d9b483d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":63,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-07T19:48:35:442933+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-07T19:48:35.445282+0000","services":{}},"progress_events":{}}
Dec  7 14:49:40 np0005549633 systemd[1]: libpod-fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2.scope: Deactivated successfully.
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.764537022 +0000 UTC m=+0.693686817 container died fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2 (image=quay.io/ceph/ceph:v19, name=suspicious_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 14:49:40 np0005549633 systemd[1]: var-lib-containers-storage-overlay-83007942e1748eccf5e3d4c7ea5650ceb686b3b9c342c223389e7c4bbdb8293a-merged.mount: Deactivated successfully.
Dec  7 14:49:40 np0005549633 podman[80649]: 2025-12-07 19:49:40.94658596 +0000 UTC m=+0.875735725 container remove fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2 (image=quay.io/ceph/ceph:v19, name=suspicious_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  7 14:49:40 np0005549633 systemd[1]: libpod-conmon-fc73f5e1ba743352b57756ac2e42de48e4a8a45b14e086be9a12b36dfb0a2ef2.scope: Deactivated successfully.
Dec  7 14:49:42 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:44 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:46 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:48 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:50 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:52 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:54 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:49:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:49:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:49:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:49:56
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] do_upmap
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] No pools available
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:49:56 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:49:56 np0005549633 ceph-mon[74384]: Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:49:57 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:49:57 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:49:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:49:57 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:49:57 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 1bd916ab-37d1-490d-88b5-71d4e41a6c7b (Updating crash deployment (+1 -> 2))
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:49:58.388+0000 7f9571538640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: service_name: mon
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: placement:
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  hosts:
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  - compute-0
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  - compute-1
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  - compute-2
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:49:58.390+0000 7f9571538640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: service_name: mgr
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: placement:
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  hosts:
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  - compute-0
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  - compute-1
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  - compute-2
Dec  7 14:49:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:49:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec  7 14:49:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: Deploying daemon crash.compute-1 on compute-1
Dec  7 14:49:59 np0005549633 ceph-mon[74384]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  7 14:50:00 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:00 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 1bd916ab-37d1-490d-88b5-71d4e41a6c7b (Updating crash deployment (+1 -> 2))
Dec  7 14:50:00 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 1bd916ab-37d1-490d-88b5-71d4e41a6c7b (Updating crash deployment (+1 -> 2)) in 2 seconds
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:01 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 2 completed events
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:50:01 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.514240344 +0000 UTC m=+0.056863854 container create 925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 14:50:01 np0005549633 systemd[1]: Started libpod-conmon-925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f.scope.
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.486530013 +0000 UTC m=+0.029153603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:01 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.613980818 +0000 UTC m=+0.156604408 container init 925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.620410388 +0000 UTC m=+0.163033898 container start 925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jones, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.624085072 +0000 UTC m=+0.166708582 container attach 925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jones, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 14:50:01 np0005549633 festive_jones[80806]: 167 167
Dec  7 14:50:01 np0005549633 systemd[1]: libpod-925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f.scope: Deactivated successfully.
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.628587099 +0000 UTC m=+0.171210619 container died 925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jones, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 14:50:01 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8d0c57a0545b095f26ffb2bf7c236b63c4ad4a82bbee1ad237b14c1968e70aef-merged.mount: Deactivated successfully.
Dec  7 14:50:01 np0005549633 podman[80790]: 2025-12-07 19:50:01.699603642 +0000 UTC m=+0.242227152 container remove 925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 14:50:01 np0005549633 systemd[1]: libpod-conmon-925a4f8202c76a7072ef997daecd41ba48189f294b58570185455ebb48d2af6f.scope: Deactivated successfully.
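Note that journald orders these podman events by write time, not event time: the "container create" line (m=+0.056) appears above the "image pull" line (m=+0.029) that actually happened first. A sketch for reordering the events by their embedded UTC timestamps (the line shape is an assumption read off this log):

```python
import re
from datetime import datetime

# Matches the payload of a podman event line as seen above, e.g.
# "2025-12-07 19:50:01.514240344 +0000 UTC m=+0.056863854 container create 925a4f..."
EVENT_RE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
    r"m=\+\S+ (?P<event>container \w+|image pull) (?P<id>[0-9a-f]+)"
)

def lifecycle(payloads):
    """Return the event names sorted by event time (nanoseconds truncated to micros)."""
    events = []
    for line in payloads:
        m = EVENT_RE.match(line)
        if m:
            ts = datetime.strptime(m["ts"][:26], "%Y-%m-%d %H:%M:%S.%f")
            events.append((ts, m["event"], m["id"][:12]))
    return [e for _, e, _ in sorted(events)]
```

Applied to the festive_jones container above, this yields the expected pull, create, init, start, attach, died, remove sequence.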
Dec  7 14:50:01 np0005549633 podman[80829]: 2025-12-07 19:50:01.911171129 +0000 UTC m=+0.051987467 container create 971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:50:01 np0005549633 systemd[1]: Started libpod-conmon-971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943.scope.
Dec  7 14:50:01 np0005549633 podman[80829]: 2025-12-07 19:50:01.88925733 +0000 UTC m=+0.030073668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:01 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60771a915ef69f62a7fd8e20c3db1b8b3d34e46bedcbc6cf05a19197f5aa8cfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60771a915ef69f62a7fd8e20c3db1b8b3d34e46bedcbc6cf05a19197f5aa8cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60771a915ef69f62a7fd8e20c3db1b8b3d34e46bedcbc6cf05a19197f5aa8cfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60771a915ef69f62a7fd8e20c3db1b8b3d34e46bedcbc6cf05a19197f5aa8cfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60771a915ef69f62a7fd8e20c3db1b8b3d34e46bedcbc6cf05a19197f5aa8cfb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
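The 0x7fffffff in the xfs warnings above is the classic 32-bit time_t limit; converting it confirms the 2038 cutoff the kernel is reporting:

```python
from datetime import datetime, timezone

# 0x7fffffff seconds after the Unix epoch is the last timestamp a
# 32-bit signed time_t (and a pre-bigtime xfs inode) can represent.
limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
print(limit.isoformat())  # 2038-01-19T03:14:07+00:00
```

These warnings are informational; the filesystem works normally until that date.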
Dec  7 14:50:02 np0005549633 podman[80829]: 2025-12-07 19:50:02.017348083 +0000 UTC m=+0.158164471 container init 971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 14:50:02 np0005549633 podman[80829]: 2025-12-07 19:50:02.034104606 +0000 UTC m=+0.174920934 container start 971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 14:50:02 np0005549633 podman[80829]: 2025-12-07 19:50:02.039218039 +0000 UTC m=+0.180034427 container attach 971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:02 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:02 np0005549633 adoring_mestorf[80846]: --> passed data devices: 0 physical, 1 LVM
Dec  7 14:50:02 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:02 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:02 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bde32eb9-6f67-49a9-82c5-0c88a97712bc
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "9ded133a-320c-4675-81aa-a6f018c479e6"} v 0)
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3235663688' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ded133a-320c-4675-81aa-a6f018c479e6"}]: dispatch
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3235663688' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ded133a-320c-4675-81aa-a6f018c479e6"}]': finished
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:02 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc"} v 0)
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4128786835' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc"}]: dispatch
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4128786835' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc"}]': finished
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:03 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:03 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
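The "(2) No such file or directory" in the mgr lines above is plain errno 2 (ENOENT): "osd new" has registered osd.0/osd.1 in the osdmap, but neither daemon has booted yet, so the mon holds no metadata for them. A one-liner check of that errno mapping:

```python
import errno
import os

# errno 2 is ENOENT; the mon returns it for "osd metadata" on an OSD
# that exists in the map but has never reported in.
assert errno.ENOENT == 2
print(os.strerror(errno.ENOENT))
```

The errors are transient during bootstrap and stop once the OSDs come up.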
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  7 14:50:03 np0005549633 lvm[80908]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:50:03 np0005549633 lvm[80908]: VG ceph_vg0 finished
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/162841291' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.101:0/3235663688' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9ded133a-320c-4675-81aa-a6f018c479e6"}]: dispatch
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.101:0/3235663688' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9ded133a-320c-4675-81aa-a6f018c479e6"}]': finished
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/4128786835' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc"}]: dispatch
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/4128786835' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc"}]': finished
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  7 14:50:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500307638' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: stderr: got monmap epoch 1
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: --> Creating keyring file for osd.1
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  7 14:50:03 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid bde32eb9-6f67-49a9-82c5-0c88a97712bc --setuser ceph --setgroup ceph
Dec  7 14:50:04 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:04 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  7 14:50:05 np0005549633 ceph-mon[74384]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  7 14:50:06 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:06 np0005549633 adoring_mestorf[80846]: stderr: 2025-12-07T19:50:03.875+0000 7fdd44c09740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec  7 14:50:06 np0005549633 adoring_mestorf[80846]: stderr: 2025-12-07T19:50:04.148+0000 7fdd44c09740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  7 14:50:06 np0005549633 adoring_mestorf[80846]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  7 14:50:06 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  7 14:50:06 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  7 14:50:07 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:07 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:07 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 14:50:07 np0005549633 adoring_mestorf[80846]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  7 14:50:07 np0005549633 adoring_mestorf[80846]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  7 14:50:07 np0005549633 adoring_mestorf[80846]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
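ceph-volume echoes each subprocess it runs as a "Running command:" line, so the full prepare/activate sequence for osd.1 can be recovered mechanically from the container output above. A small sketch (the marker string is taken from this log):

```python
def commands(payloads):
    """Extract the commands ceph-volume reports via 'Running command:' lines."""
    marker = "Running command: "
    return [line.split(marker, 1)[1] for line in payloads if marker in line]
```

Run over the adoring_mestorf lines, this lists the mount, chown, ln, monmap fetch, and ceph-osd --mkfs steps in order.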
Dec  7 14:50:07 np0005549633 systemd[1]: libpod-971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943.scope: Deactivated successfully.
Dec  7 14:50:07 np0005549633 systemd[1]: libpod-971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943.scope: Consumed 2.830s CPU time.
Dec  7 14:50:07 np0005549633 podman[81813]: 2025-12-07 19:50:07.321474496 +0000 UTC m=+0.042792298 container died 971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Dec  7 14:50:07 np0005549633 systemd[1]: var-lib-containers-storage-overlay-60771a915ef69f62a7fd8e20c3db1b8b3d34e46bedcbc6cf05a19197f5aa8cfb-merged.mount: Deactivated successfully.
Dec  7 14:50:07 np0005549633 podman[81813]: 2025-12-07 19:50:07.375957233 +0000 UTC m=+0.097274955 container remove 971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_mestorf, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:07 np0005549633 systemd[1]: libpod-conmon-971a72095f9f0477201afd0a842d843dade501fd5a08acb6c699c501f7add943.scope: Deactivated successfully.
Dec  7 14:50:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  7 14:50:08 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  7 14:50:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:08 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:08 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec  7 14:50:08 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.133945579 +0000 UTC m=+0.046211454 container create 74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_moser, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:50:08 np0005549633 systemd[1]: Started libpod-conmon-74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc.scope.
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.112532015 +0000 UTC m=+0.024797860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:08 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.25845503 +0000 UTC m=+0.170720935 container init 74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.271397435 +0000 UTC m=+0.183663290 container start 74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_moser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.275770568 +0000 UTC m=+0.188036433 container attach 74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_moser, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  7 14:50:08 np0005549633 sweet_moser[81933]: 167 167
Dec  7 14:50:08 np0005549633 systemd[1]: libpod-74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc.scope: Deactivated successfully.
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.279984207 +0000 UTC m=+0.192250112 container died 74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Dec  7 14:50:08 np0005549633 systemd[1]: var-lib-containers-storage-overlay-b8b68f6481c7d5748a5f2bbb5bfca5a77e602e1ed8c5404bc92c50b7e46ba7e3-merged.mount: Deactivated successfully.
Dec  7 14:50:08 np0005549633 podman[81917]: 2025-12-07 19:50:08.331160641 +0000 UTC m=+0.243426496 container remove 74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_moser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:50:08 np0005549633 systemd[1]: libpod-conmon-74cd93f05dd330a57d63d08a9fea40a2db3af85fd3b5311ec11c1f52da7897dc.scope: Deactivated successfully.
Dec  7 14:50:08 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:08 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  7 14:50:08 np0005549633 podman[81958]: 2025-12-07 19:50:08.570213073 +0000 UTC m=+0.069004748 container create a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:50:08 np0005549633 systemd[1]: Started libpod-conmon-a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7.scope.
Dec  7 14:50:08 np0005549633 podman[81958]: 2025-12-07 19:50:08.545541707 +0000 UTC m=+0.044333442 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:08 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196fcd2d810381759fbabd205e74b44b15030e23cc6c365304391114e222f08f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196fcd2d810381759fbabd205e74b44b15030e23cc6c365304391114e222f08f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196fcd2d810381759fbabd205e74b44b15030e23cc6c365304391114e222f08f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196fcd2d810381759fbabd205e74b44b15030e23cc6c365304391114e222f08f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:08 np0005549633 podman[81958]: 2025-12-07 19:50:08.70096828 +0000 UTC m=+0.199760045 container init a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:50:08 np0005549633 podman[81958]: 2025-12-07 19:50:08.715038637 +0000 UTC m=+0.213830322 container start a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  7 14:50:08 np0005549633 podman[81958]: 2025-12-07 19:50:08.718782953 +0000 UTC m=+0.217574708 container attach a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_matsumoto, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]: {
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:    "1": [
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:        {
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "devices": [
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "/dev/loop3"
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            ],
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "lv_name": "ceph_lv0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "lv_size": "21470642176",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8ac706f-8288-541e-8e56-e1124d9b483d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bde32eb9-6f67-49a9-82c5-0c88a97712bc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "lv_uuid": "SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "name": "ceph_lv0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "tags": {
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.block_uuid": "SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.cephx_lockbox_secret": "",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.cluster_fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.cluster_name": "ceph",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.crush_device_class": "",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.encrypted": "0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.osd_fsid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.osd_id": "1",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.type": "block",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.vdo": "0",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:                "ceph.with_tpm": "0"
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            },
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "type": "block",
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:            "vg_name": "ceph_vg0"
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:        }
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]:    ]
Dec  7 14:50:09 np0005549633 compassionate_matsumoto[81975]: }
Dec  7 14:50:09 np0005549633 systemd[1]: libpod-a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7.scope: Deactivated successfully.
Dec  7 14:50:09 np0005549633 podman[81958]: 2025-12-07 19:50:09.068109974 +0000 UTC m=+0.566901659 container died a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:09 np0005549633 systemd[1]: var-lib-containers-storage-overlay-196fcd2d810381759fbabd205e74b44b15030e23cc6c365304391114e222f08f-merged.mount: Deactivated successfully.
Dec  7 14:50:09 np0005549633 podman[81958]: 2025-12-07 19:50:09.12543171 +0000 UTC m=+0.624223425 container remove a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:09 np0005549633 systemd[1]: libpod-conmon-a85459c39badfc0842fecb806b45580b9751b26eb77fbcd37efabcac42663dd7.scope: Deactivated successfully.
Dec  7 14:50:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  7 14:50:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  7 14:50:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:09 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  7 14:50:09 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  7 14:50:09 np0005549633 ceph-mon[74384]: Deploying daemon osd.0 on compute-1
Dec  7 14:50:09 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  7 14:50:09 np0005549633 podman[82087]: 2025-12-07 19:50:09.909327887 +0000 UTC m=+0.061050422 container create 4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wilbur, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:09 np0005549633 systemd[1]: Started libpod-conmon-4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c.scope.
Dec  7 14:50:09 np0005549633 podman[82087]: 2025-12-07 19:50:09.88104376 +0000 UTC m=+0.032766345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:09 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:10 np0005549633 podman[82087]: 2025-12-07 19:50:10.014147423 +0000 UTC m=+0.165869968 container init 4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:10 np0005549633 podman[82087]: 2025-12-07 19:50:10.024358731 +0000 UTC m=+0.176081246 container start 4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 14:50:10 np0005549633 podman[82087]: 2025-12-07 19:50:10.029373413 +0000 UTC m=+0.181095928 container attach 4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wilbur, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 14:50:10 np0005549633 goofy_wilbur[82103]: 167 167
Dec  7 14:50:10 np0005549633 systemd[1]: libpod-4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c.scope: Deactivated successfully.
Dec  7 14:50:10 np0005549633 podman[82087]: 2025-12-07 19:50:10.033355955 +0000 UTC m=+0.185078510 container died 4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  7 14:50:10 np0005549633 systemd[1]: var-lib-containers-storage-overlay-966f0ecce6e229d82485ba4234592f76305aace0bfbc03e387fc858ca905920c-merged.mount: Deactivated successfully.
Dec  7 14:50:10 np0005549633 podman[82087]: 2025-12-07 19:50:10.078606581 +0000 UTC m=+0.230329116 container remove 4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 14:50:10 np0005549633 systemd[1]: libpod-conmon-4d13e0f35920892795ec63aa90efdd3b1cd05551f9883040f66a0a506100434c.scope: Deactivated successfully.
Dec  7 14:50:10 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.397449053 +0000 UTC m=+0.055968449 container create 8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 14:50:10 np0005549633 systemd[1]: Started libpod-conmon-8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4.scope.
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.372692465 +0000 UTC m=+0.031211951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:10 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e433e052441d73f67963c6b2c2075a84d640a289c07fa1b0fa70f96dff55d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e433e052441d73f67963c6b2c2075a84d640a289c07fa1b0fa70f96dff55d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e433e052441d73f67963c6b2c2075a84d640a289c07fa1b0fa70f96dff55d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e433e052441d73f67963c6b2c2075a84d640a289c07fa1b0fa70f96dff55d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31e433e052441d73f67963c6b2c2075a84d640a289c07fa1b0fa70f96dff55d0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.500276693 +0000 UTC m=+0.158796129 container init 8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.511872511 +0000 UTC m=+0.170391907 container start 8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.516898732 +0000 UTC m=+0.175418218 container attach 8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 14:50:10 np0005549633 ceph-mon[74384]: Deploying daemon osd.1 on compute-0
Dec  7 14:50:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test[82149]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  7 14:50:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test[82149]:                            [--no-systemd] [--no-tmpfs]
Dec  7 14:50:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test[82149]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  7 14:50:10 np0005549633 systemd[1]: libpod-8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4.scope: Deactivated successfully.
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.692625528 +0000 UTC m=+0.351145014 container died 8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:50:10 np0005549633 systemd[1]: var-lib-containers-storage-overlay-31e433e052441d73f67963c6b2c2075a84d640a289c07fa1b0fa70f96dff55d0-merged.mount: Deactivated successfully.
Dec  7 14:50:10 np0005549633 podman[82133]: 2025-12-07 19:50:10.753223717 +0000 UTC m=+0.411743153 container remove 8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Dec  7 14:50:10 np0005549633 systemd[1]: libpod-conmon-8a57736ba6d9a5bfa40addf929bf8272ab34274e46f55b0ceeba54ea6ae6c6f4.scope: Deactivated successfully.
Dec  7 14:50:11 np0005549633 systemd[1]: Reloading.
Dec  7 14:50:11 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:50:11 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:50:11 np0005549633 python3[82209]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:50:11 np0005549633 podman[82247]: 2025-12-07 19:50:11.410696398 +0000 UTC m=+0.083752002 container create 75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2 (image=quay.io/ceph/ceph:v19, name=modest_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:50:11 np0005549633 systemd[1]: Started libpod-conmon-75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2.scope.
Dec  7 14:50:11 np0005549633 podman[82247]: 2025-12-07 19:50:11.375205627 +0000 UTC m=+0.048261301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:50:11 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:11 np0005549633 systemd[1]: Reloading.
Dec  7 14:50:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a08bdcad4fc5ae1123618d3ebd51a1cc2904aa21a10fcbfd6accb24028c114/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a08bdcad4fc5ae1123618d3ebd51a1cc2904aa21a10fcbfd6accb24028c114/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13a08bdcad4fc5ae1123618d3ebd51a1cc2904aa21a10fcbfd6accb24028c114/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:11 np0005549633 podman[82247]: 2025-12-07 19:50:11.520511355 +0000 UTC m=+0.193566959 container init 75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2 (image=quay.io/ceph/ceph:v19, name=modest_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 14:50:11 np0005549633 podman[82247]: 2025-12-07 19:50:11.53023717 +0000 UTC m=+0.203292754 container start 75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2 (image=quay.io/ceph/ceph:v19, name=modest_jackson, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:11 np0005549633 podman[82247]: 2025-12-07 19:50:11.534114769 +0000 UTC m=+0.207170473 container attach 75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2 (image=quay.io/ceph/ceph:v19, name=modest_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  7 14:50:11 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:50:11 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:50:11 np0005549633 systemd[1]: Starting Ceph osd.1 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:50:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 14:50:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2647489626' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 14:50:11 np0005549633 modest_jackson[82268]: 
Dec  7 14:50:11 np0005549633 modest_jackson[82268]: {"fsid":"a8ac706f-8288-541e-8e56-e1124d9b483d","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":94,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1765137003,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-07T19:48:35:442933+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-07T19:49:58.023674+0000","services":{}},"progress_events":{}}
Dec  7 14:50:12 np0005549633 systemd[1]: libpod-75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2.scope: Deactivated successfully.
Dec  7 14:50:12 np0005549633 podman[82247]: 2025-12-07 19:50:12.016710278 +0000 UTC m=+0.689765902 container died 75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2 (image=quay.io/ceph/ceph:v19, name=modest_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 14:50:12 np0005549633 podman[82377]: 2025-12-07 19:50:12.031888007 +0000 UTC m=+0.060641961 container create 4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:12 np0005549633 systemd[1]: var-lib-containers-storage-overlay-13a08bdcad4fc5ae1123618d3ebd51a1cc2904aa21a10fcbfd6accb24028c114-merged.mount: Deactivated successfully.
Dec  7 14:50:12 np0005549633 podman[82247]: 2025-12-07 19:50:12.070582988 +0000 UTC m=+0.743638572 container remove 75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2 (image=quay.io/ceph/ceph:v19, name=modest_jackson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  7 14:50:12 np0005549633 systemd[1]: libpod-conmon-75166f3bdd8b93b2e83e7e44926565963f0c0f6c49c8e153d1f39d19dbaac1d2.scope: Deactivated successfully.
Dec  7 14:50:12 np0005549633 podman[82377]: 2025-12-07 19:50:11.997839757 +0000 UTC m=+0.026593781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:12 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d64ea539136e5ffed2996fb998b213140efb65c9063706bb5f0714e07be5e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d64ea539136e5ffed2996fb998b213140efb65c9063706bb5f0714e07be5e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d64ea539136e5ffed2996fb998b213140efb65c9063706bb5f0714e07be5e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d64ea539136e5ffed2996fb998b213140efb65c9063706bb5f0714e07be5e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:12 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d64ea539136e5ffed2996fb998b213140efb65c9063706bb5f0714e07be5e0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:12 np0005549633 podman[82377]: 2025-12-07 19:50:12.115847384 +0000 UTC m=+0.144601338 container init 4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:12 np0005549633 podman[82377]: 2025-12-07 19:50:12.124316523 +0000 UTC m=+0.153070507 container start 4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 14:50:12 np0005549633 podman[82377]: 2025-12-07 19:50:12.130092106 +0000 UTC m=+0.158846090 container attach 4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:12 np0005549633 bash[82377]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:12 np0005549633 bash[82377]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:12 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:13 np0005549633 lvm[82485]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:50:13 np0005549633 lvm[82485]: VG ceph_vg0 finished
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:13 np0005549633 bash[82377]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  7 14:50:13 np0005549633 bash[82377]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  7 14:50:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate[82404]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  7 14:50:13 np0005549633 bash[82377]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  7 14:50:13 np0005549633 systemd[1]: libpod-4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da.scope: Deactivated successfully.
Dec  7 14:50:13 np0005549633 podman[82377]: 2025-12-07 19:50:13.653263472 +0000 UTC m=+1.682017456 container died 4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:50:13 np0005549633 systemd[1]: libpod-4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da.scope: Consumed 1.897s CPU time.
Dec  7 14:50:13 np0005549633 systemd[1]: var-lib-containers-storage-overlay-c2d64ea539136e5ffed2996fb998b213140efb65c9063706bb5f0714e07be5e0-merged.mount: Deactivated successfully.
Dec  7 14:50:13 np0005549633 podman[82377]: 2025-12-07 19:50:13.748791925 +0000 UTC m=+1.777545919 container remove 4405f6defb489682f0e9140707ba6eae74a6e4fecf68f9cb474d1378e97933da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:50:14 np0005549633 podman[82652]: 2025-12-07 19:50:14.05815328 +0000 UTC m=+0.080626184 container create 1c564ba15e3682e18d7e7c5b2f6f71fbf9106cb4c6e0360946fb6a71bcc4e2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 14:50:14 np0005549633 podman[82652]: 2025-12-07 19:50:14.005801794 +0000 UTC m=+0.028274678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8f0b60a8e5b364f07982fa39c4fe8f44baf9abce96221c2c0e8b1c1c1c3910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8f0b60a8e5b364f07982fa39c4fe8f44baf9abce96221c2c0e8b1c1c1c3910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8f0b60a8e5b364f07982fa39c4fe8f44baf9abce96221c2c0e8b1c1c1c3910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8f0b60a8e5b364f07982fa39c4fe8f44baf9abce96221c2c0e8b1c1c1c3910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b8f0b60a8e5b364f07982fa39c4fe8f44baf9abce96221c2c0e8b1c1c1c3910/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:14 np0005549633 podman[82652]: 2025-12-07 19:50:14.145383271 +0000 UTC m=+0.167856185 container init 1c564ba15e3682e18d7e7c5b2f6f71fbf9106cb4c6e0360946fb6a71bcc4e2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 14:50:14 np0005549633 podman[82652]: 2025-12-07 19:50:14.157768169 +0000 UTC m=+0.180241063 container start 1c564ba15e3682e18d7e7c5b2f6f71fbf9106cb4c6e0360946fb6a71bcc4e2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:50:14 np0005549633 bash[82652]: 1c564ba15e3682e18d7e7c5b2f6f71fbf9106cb4c6e0360946fb6a71bcc4e2a0
Dec  7 14:50:14 np0005549633 systemd[1]: Started Ceph osd.1 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: pidfile_write: ignore empty --pid-file
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:14 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:14 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  7 14:50:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  7 14:50:14 np0005549633 podman[82782]: 2025-12-07 19:50:14.938427075 +0000 UTC m=+0.049360343 container create 2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:14 np0005549633 systemd[1]: Started libpod-conmon-2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3.scope.
Dec  7 14:50:15 np0005549633 podman[82782]: 2025-12-07 19:50:14.913148382 +0000 UTC m=+0.024081740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:15 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:15 np0005549633 podman[82782]: 2025-12-07 19:50:15.065441417 +0000 UTC m=+0.176374705 container init 2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_allen, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:15 np0005549633 podman[82782]: 2025-12-07 19:50:15.074582555 +0000 UTC m=+0.185515833 container start 2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:15 np0005549633 podman[82782]: 2025-12-07 19:50:15.079195866 +0000 UTC m=+0.190129214 container attach 2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_allen, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  7 14:50:15 np0005549633 nice_allen[82798]: 167 167
Dec  7 14:50:15 np0005549633 systemd[1]: libpod-2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3.scope: Deactivated successfully.
Dec  7 14:50:15 np0005549633 conmon[82798]: conmon 2e78a3ba2a7faca655c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3.scope/container/memory.events
Dec  7 14:50:15 np0005549633 podman[82782]: 2025-12-07 19:50:15.086097199 +0000 UTC m=+0.197030507 container died 2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_allen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:15 np0005549633 systemd[1]: var-lib-containers-storage-overlay-36431402aac5b882f0db6e1b0bf3edced4f5ab104f7300b0784c5c80887ab867-merged.mount: Deactivated successfully.
Dec  7 14:50:15 np0005549633 podman[82782]: 2025-12-07 19:50:15.144170028 +0000 UTC m=+0.255103306 container remove 2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:15 np0005549633 systemd[1]: libpod-conmon-2e78a3ba2a7faca655c77f3143a9b55357cb3ae28c8237f6a461d5d513ca6de3.scope: Deactivated successfully.
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:15 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:15 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:15 np0005549633 podman[82823]: 2025-12-07 19:50:15.381417748 +0000 UTC m=+0.073811482 container create efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_khayyam, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0e4b1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:15 np0005549633 systemd[1]: Started libpod-conmon-efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1.scope.
Dec  7 14:50:15 np0005549633 podman[82823]: 2025-12-07 19:50:15.352861183 +0000 UTC m=+0.045254967 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:15 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7044f2d32f03649cc9bf007e367968b4d94059463ebb1d842e7c687db5757484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7044f2d32f03649cc9bf007e367968b4d94059463ebb1d842e7c687db5757484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7044f2d32f03649cc9bf007e367968b4d94059463ebb1d842e7c687db5757484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:15 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7044f2d32f03649cc9bf007e367968b4d94059463ebb1d842e7c687db5757484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:15 np0005549633 podman[82823]: 2025-12-07 19:50:15.500654441 +0000 UTC m=+0.193048175 container init efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_khayyam, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:50:15 np0005549633 podman[82823]: 2025-12-07 19:50:15.515850509 +0000 UTC m=+0.208244233 container start efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:15 np0005549633 podman[82823]: 2025-12-07 19:50:15.519852162 +0000 UTC m=+0.212245886 container attach efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: load: jerasure load: lrc 
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 14:50:15 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:16 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:16 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  7 14:50:16 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:16 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:16 np0005549633 lvm[82934]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:50:16 np0005549633 lvm[82934]: VG ceph_vg0 finished
Dec  7 14:50:16 np0005549633 confident_khayyam[82842]: {}
Dec  7 14:50:16 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:16 np0005549633 systemd[1]: libpod-efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1.scope: Deactivated successfully.
Dec  7 14:50:16 np0005549633 systemd[1]: libpod-efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1.scope: Consumed 1.591s CPU time.
Dec  7 14:50:16 np0005549633 podman[82823]: 2025-12-07 19:50:16.420500362 +0000 UTC m=+1.112894076 container died efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:16 np0005549633 systemd[1]: var-lib-containers-storage-overlay-7044f2d32f03649cc9bf007e367968b4d94059463ebb1d842e7c687db5757484-merged.mount: Deactivated successfully.
Dec  7 14:50:16 np0005549633 podman[82823]: 2025-12-07 19:50:16.472529139 +0000 UTC m=+1.164922823 container remove efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 14:50:16 np0005549633 systemd[1]: libpod-conmon-efc66a57d97abd88bb03dd5a208e336dfe630052ab722d016d32bc9ce32dd6c1.scope: Deactivated successfully.
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount shared_bdev_used = 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: RocksDB version: 7.9.2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Git sha 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DB SUMMARY
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DB Session ID:  AZ9BVCBB00NUB4AM4UPV
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: CURRENT file:  CURRENT
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.error_if_exists: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.create_if_missing: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                     Options.env: 0x563b0f31ddc0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                Options.info_log: 0x563b0f3217a0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                              Options.statistics: (nil)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.use_fsync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                              Options.db_log_dir: 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                 Options.wal_dir: db.wal
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.write_buffer_manager: 0x563b0f418a00
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.unordered_write: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.row_cache: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                              Options.wal_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.two_write_queues: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.wal_compression: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.atomic_flush: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_background_jobs: 4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_background_compactions: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_subcompactions: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.max_open_files: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Compression algorithms supported:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kZSTD supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kXpressCompression supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kBZip2Compression supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kLZ4Compression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kZlibCompression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kSnappyCompression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e5469b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e5469b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e5469b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3c089663-c280-454a-97d4-9a54e37ea45b
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016577577, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016578014, "job": 1, "event": "recovery_finished"}
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: freelist init
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: freelist _read_cfg
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs umount
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) close
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bdev(0x563b0f34d000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluefs mount shared_bdev_used = 4718592
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: RocksDB version: 7.9.2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Git sha 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DB SUMMARY
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DB Session ID:  AZ9BVCBB00NUB4AM4UPU
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: CURRENT file:  CURRENT
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: IDENTITY file:  IDENTITY
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.error_if_exists: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.create_if_missing: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.paranoid_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                     Options.env: 0x563b0f4bc2a0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                Options.info_log: 0x563b0f321920
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_file_opening_threads: 16
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                              Options.statistics: (nil)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.use_fsync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.max_log_file_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.allow_fallocate: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.use_direct_reads: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.create_missing_column_families: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                              Options.db_log_dir: 
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                                 Options.wal_dir: db.wal
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.advise_random_on_open: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.write_buffer_manager: 0x563b0f418c80
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                            Options.rate_limiter: (nil)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.unordered_write: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.row_cache: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                              Options.wal_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.allow_ingest_behind: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.two_write_queues: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.manual_wal_flush: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.wal_compression: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.atomic_flush: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.log_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.allow_data_in_errors: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.db_host_id: __hostname__
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_background_jobs: 4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_background_compactions: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_subcompactions: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.max_open_files: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.bytes_per_sync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.max_background_flushes: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Compression algorithms supported:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kZSTD supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kXpressCompression supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kBZip2Compression supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kLZ4Compression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kZlibCompression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: 	kSnappyCompression supported: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e547350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e547350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321ac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x563b0e5469b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e5469b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:           Options.merge_operator: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.compaction_filter_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.sst_partitioner_factory: None
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563b0f321ac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x563b0e5469b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.write_buffer_size: 16777216
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.max_write_buffer_number: 64
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.compression: LZ4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.num_levels: 7
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.level: 32767
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.compression_opts.strategy: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                  Options.compression_opts.enabled: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.arena_block_size: 1048576
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.disable_auto_compactions: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.inplace_update_support: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.bloom_locality: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                    Options.max_successive_merges: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.paranoid_file_checks: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.force_consistency_checks: 1
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.report_bg_io_stats: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                               Options.ttl: 2592000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                       Options.enable_blob_files: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                           Options.min_blob_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                          Options.blob_file_size: 268435456
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb:                Options.blob_file_starting_level: 0
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
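The `#011` sequences in the two lines above are rsyslog's octal escapes for control characters (011 = tab, 012 = newline); the raw RocksDB output contained real tabs and newlines. A minimal sketch, assuming plain Python and no Ceph tooling, to restore the original characters when post-processing entries like these:

```python
import re

def unescape_rsyslog(line: str) -> str:
    """Undo rsyslog's control-character escaping: '#' followed by three
    octal digits (e.g. #011 = tab, #012 = newline) becomes the raw char."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), line)

print(unescape_rsyslog(
    "rocksdb: [db/column_family.cc:635] #011(skipping printing options)"))
```

The regex requires exactly three octal digits, so incidental `#` characters in the log (e.g. `BinnedLRUCache@0x563b0e547350#2`) are left untouched.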
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3c089663-c280-454a-97d4-9a54e37ea45b
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016880976, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016885615, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765137016, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3c089663-c280-454a-97d4-9a54e37ea45b", "db_session_id": "AZ9BVCBB00NUB4AM4UPU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016893788, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765137016, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3c089663-c280-454a-97d4-9a54e37ea45b", "db_session_id": "AZ9BVCBB00NUB4AM4UPU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016897510, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765137016, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3c089663-c280-454a-97d4-9a54e37ea45b", "db_session_id": "AZ9BVCBB00NUB4AM4UPU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016899139, "job": 1, "event": "recovery_finished"}
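The `EVENT_LOG_v1` entries above carry a JSON payload after the marker, which makes the WAL-recovery sequence (`recovery_started` → `table_file_creation` → `recovery_finished`) easy to pull out of a log programmatically. A minimal sketch in plain Python; the sample line is abbreviated from the entries above:

```python
import json
import re

def parse_event_log(line: str):
    """Extract and decode the JSON payload of a RocksDB EVENT_LOG_v1 line;
    returns None when the line carries no event payload."""
    m = re.search(r"EVENT_LOG_v1 (\{.*\})", line)
    return json.loads(m.group(1)) if m else None

evt = parse_event_log('rocksdb: EVENT_LOG_v1 {"time_micros": 1765137016880976, '
                      '"job": 1, "event": "recovery_started", "wal_files": [31]}')
print(evt["event"])  # recovery_started
```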
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563b0f530000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: DB pointer 0x563b0f4c8000
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
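The `_open_db` line above reports the RocksDB tuning BlueStore applied as one flat comma-separated option string. A minimal sketch (plain Python) that splits such a string into key/value pairs for inspection; note this naive split is only safe for flat strings like the one shown, not for nested RocksDB option syntax using braces or semicolons:

```python
def parse_option_string(opts: str) -> dict:
    """Split a flat 'k1=v1,k2=v2' RocksDB option string into a dict of strings."""
    return dict(item.split("=", 1) for item in opts.split(",") if item)

opts = parse_option_string(
    "compression=kLZ4Compression,max_write_buffer_number=64,"
    "write_buffer_size=16777216,compaction_readahead_size=2MB"
)
print(opts["max_write_buffer_number"])  # 64
```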
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.1 total, 0.1 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563b0e547350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563b0e547350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.1 total, 0.1 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563b0e547350#2 capacity: 460.80 MB usag
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: _get_class not permitted to load lua
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: _get_class not permitted to load sdk
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 load_pgs
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 load_pgs opened 0 pgs
Dec  7 14:50:16 np0005549633 ceph-osd[82672]: osd.1 0 log_to_monitors true
Dec  7 14:50:16 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1[82668]: 2025-12-07T19:50:16.933+0000 7f7e15e38740 -1 osd.1 0 log_to_monitors true
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  7 14:50:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  7 14:50:17 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: from='osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:17 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e8 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:17 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:17 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:17 np0005549633 podman[83503]: 2025-12-07 19:50:17.735444836 +0000 UTC m=+0.087814128 container exec a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 14:50:17 np0005549633 podman[83503]: 2025-12-07 19:50:17.828228512 +0000 UTC m=+0.180597734 container exec_died a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 14:50:17 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  7 14:50:17 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  7 14:50:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:18 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:18 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e9 e9: 2 total, 0 up, 2 in
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0 done with init, starting boot process
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0 start_boot
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  7 14:50:18 np0005549633 ceph-osd[82672]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 0 up, 2 in
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:18 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:18 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:18 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:19 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:19 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: from='osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:19 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:19 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:20.019922401 +0000 UTC m=+0.084582396 container create 65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:19.97448881 +0000 UTC m=+0.039148795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:20 np0005549633 systemd[1]: Started libpod-conmon-65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6.scope.
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:20 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:20.211090512 +0000 UTC m=+0.275750577 container init 65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:20.233689989 +0000 UTC m=+0.298350004 container start 65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:20.238767093 +0000 UTC m=+0.303427158 container attach 65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:20 np0005549633 systemd[1]: libpod-65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6.scope: Deactivated successfully.
Dec  7 14:50:20 np0005549633 distracted_hermann[83772]: 167 167
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:20.253317693 +0000 UTC m=+0.317977718 container died 65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 14:50:20 np0005549633 conmon[83772]: conmon 65d1f1a6062787b0f3d1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6.scope/container/memory.events
Dec  7 14:50:20 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:20 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:20 np0005549633 systemd[1]: var-lib-containers-storage-overlay-5b454742972d970e9e33d131909f5f53d6a98e74faf3785a1b435f4eb23ce8ac-merged.mount: Deactivated successfully.
Dec  7 14:50:20 np0005549633 podman[83755]: 2025-12-07 19:50:20.376355073 +0000 UTC m=+0.441015078 container remove 65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hermann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  7 14:50:20 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:20 np0005549633 systemd[1]: libpod-conmon-65d1f1a6062787b0f3d1e1f77b114f33c0990b45c29a5da3a8b78d58253e2bf6.scope: Deactivated successfully.
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:20 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:20 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:20 np0005549633 podman[83796]: 2025-12-07 19:50:20.816057883 +0000 UTC m=+0.112694880 container create afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_leavitt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Dec  7 14:50:20 np0005549633 podman[83796]: 2025-12-07 19:50:20.786226792 +0000 UTC m=+0.082863899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:50:20 np0005549633 systemd[1]: Started libpod-conmon-afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e.scope.
Dec  7 14:50:21 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8e73425c119841cdc4149f24cb235fb5b96ef940e780bac4da992dbc343222/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8e73425c119841cdc4149f24cb235fb5b96ef940e780bac4da992dbc343222/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8e73425c119841cdc4149f24cb235fb5b96ef940e780bac4da992dbc343222/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8e73425c119841cdc4149f24cb235fb5b96ef940e780bac4da992dbc343222/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:21 np0005549633 podman[83796]: 2025-12-07 19:50:21.204036764 +0000 UTC m=+0.500673771 container init afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 14:50:21 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:21 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:21 np0005549633 podman[83796]: 2025-12-07 19:50:21.380795169 +0000 UTC m=+0.677432176 container start afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 14:50:21 np0005549633 podman[83796]: 2025-12-07 19:50:21.41167828 +0000 UTC m=+0.708315307 container attach afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 14:50:21 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:21 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]: [
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:    {
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "available": false,
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "being_replaced": false,
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "ceph_device_lvm": false,
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "lsm_data": {},
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "lvs": [],
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "path": "/dev/sr0",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "rejected_reasons": [
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "Insufficient space (<5GB)",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "Has a FileSystem"
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        ],
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        "sys_api": {
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "actuators": null,
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "device_nodes": [
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:                "sr0"
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            ],
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "devname": "sr0",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "human_readable_size": "482.00 KB",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "id_bus": "ata",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "model": "QEMU DVD-ROM",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "nr_requests": "2",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "parent": "/dev/sr0",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "partitions": {},
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "path": "/dev/sr0",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "removable": "1",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "rev": "2.5+",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "ro": "0",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "rotational": "1",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "sas_address": "",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "sas_device_handle": "",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "scheduler_mode": "mq-deadline",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "sectors": 0,
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "sectorsize": "2048",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "size": 493568.0,
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "support_discard": "2048",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "type": "disk",
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:            "vendor": "QEMU"
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:        }
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]:    }
Dec  7 14:50:22 np0005549633 adoring_leavitt[83812]: ]
Dec  7 14:50:22 np0005549633 systemd[1]: libpod-afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e.scope: Deactivated successfully.
Dec  7 14:50:22 np0005549633 podman[83796]: 2025-12-07 19:50:22.382668514 +0000 UTC m=+1.679305581 container died afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 14:50:22 np0005549633 systemd[1]: libpod-afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e.scope: Consumed 1.056s CPU time.
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  7 14:50:22 np0005549633 systemd[1]: var-lib-containers-storage-overlay-1e8e73425c119841cdc4149f24cb235fb5b96ef940e780bac4da992dbc343222-merged.mount: Deactivated successfully.
Dec  7 14:50:22 np0005549633 podman[83796]: 2025-12-07 19:50:22.510331764 +0000 UTC m=+1.806968781 container remove afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_leavitt, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:50:22 np0005549633 systemd[1]: libpod-conmon-afc4a3219b91bd4d88a3c1667a17197e322c645ce3c2e3aaafd97e203643a95e.scope: Deactivated successfully.
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e9 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:50:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:50:22 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:50:23 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3280358460; not ready for session (expect reconnect)
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:23 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  7 14:50:23 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:23 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-1 to  5247M
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:50:23 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 19.232 iops: 4923.371 elapsed_sec: 0.609
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: log_channel(cluster) log [WRN] : OSD bench result of 4923.371303 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 0 waiting for initial osdmap
Dec  7 14:50:24 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1[82668]: 2025-12-07T19:50:24.063+0000 7f7e11dbb640 -1 osd.1 0 waiting for initial osdmap
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 check_osdmap_features require_osd_release unknown -> squid
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  7 14:50:24 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-osd-1[82668]: 2025-12-07T19:50:24.088+0000 7f7e0d3e3640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 set_numa_affinity not setting numa affinity
Dec  7 14:50:24 np0005549633 ceph-osd[82672]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460] boot
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:24 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:24 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] creating mgr pool
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  7 14:50:24 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  7 14:50:24 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/3681996851; not ready for session (expect reconnect)
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:24 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: OSD bench result of 3537.665288 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: osd.0 [v2:192.168.122.101:6800/3280358460,v1:192.168.122.101:6801/3280358460] boot
Dec  7 14:50:24 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:50:25 np0005549633 ceph-osd[82672]: osd.1 9 tick checking mon for new map
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851] boot
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  7 14:50:25 np0005549633 ceph-osd[82672]: osd.1 11 state: booting -> active
Dec  7 14:50:25 np0005549633 ceph-osd[82672]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  7 14:50:25 np0005549633 ceph-osd[82672]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  7 14:50:25 np0005549633 ceph-osd[82672]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  7 14:50:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 11 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: OSD bench result of 4923.371303 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: osd.1 [v2:192.168.122.100:6802/3681996851,v1:192.168.122.100:6803/3681996851] boot
Dec  7 14:50:25 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:50:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=11/12 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=11) [1] r=0 lpr=11 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] creating main.db for devicehealth
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 14:50:26 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Check health
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:50:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  7 14:50:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:28 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dyzcyj(active, since 92s)
Dec  7 14:50:28 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 14:50:30 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  7 14:50:32 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:34 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:36 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:38 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:40 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:42 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:42 np0005549633 python3[84945]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:50:42 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:50:42 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:50:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:42 np0005549633 podman[84947]: 2025-12-07 19:50:42.823356141 +0000 UTC m=+0.168122182 container create 1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488 (image=quay.io/ceph/ceph:v19, name=vibrant_bouman, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 14:50:42 np0005549633 systemd[1]: Started libpod-conmon-1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488.scope.
Dec  7 14:50:42 np0005549633 podman[84947]: 2025-12-07 19:50:42.7879048 +0000 UTC m=+0.132670911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:50:42 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:42 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c58b2da3396426f7e80e659159ca2a92536ece8f0adae14c72e9e5ae12ef4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:42 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c58b2da3396426f7e80e659159ca2a92536ece8f0adae14c72e9e5ae12ef4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:42 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/239c58b2da3396426f7e80e659159ca2a92536ece8f0adae14c72e9e5ae12ef4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:42 np0005549633 podman[84947]: 2025-12-07 19:50:42.934440653 +0000 UTC m=+0.279206724 container init 1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488 (image=quay.io/ceph/ceph:v19, name=vibrant_bouman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:42 np0005549633 podman[84947]: 2025-12-07 19:50:42.964218113 +0000 UTC m=+0.308984174 container start 1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488 (image=quay.io/ceph/ceph:v19, name=vibrant_bouman, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 14:50:42 np0005549633 podman[84947]: 2025-12-07 19:50:42.969368498 +0000 UTC m=+0.314134609 container attach 1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488 (image=quay.io/ceph/ceph:v19, name=vibrant_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 14:50:43 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:50:43 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 14:50:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3599487357' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 14:50:43 np0005549633 vibrant_bouman[84963]: 
Dec  7 14:50:43 np0005549633 vibrant_bouman[84963]: {"fsid":"a8ac706f-8288-541e-8e56-e1124d9b483d","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":126,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1765137025,"num_in_osds":2,"osd_in_since":1765137003,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":475234304,"bytes_avail":42466050048,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-07T19:48:35:442933+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-07T19:49:58.023674+0000","services":{}},"progress_events":{}}
Dec  7 14:50:43 np0005549633 systemd[1]: libpod-1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488.scope: Deactivated successfully.
Dec  7 14:50:43 np0005549633 podman[84988]: 2025-12-07 19:50:43.834663531 +0000 UTC m=+0.085737789 container died 1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488 (image=quay.io/ceph/ceph:v19, name=vibrant_bouman, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:50:43 np0005549633 systemd[1]: var-lib-containers-storage-overlay-239c58b2da3396426f7e80e659159ca2a92536ece8f0adae14c72e9e5ae12ef4-merged.mount: Deactivated successfully.
Dec  7 14:50:43 np0005549633 podman[84988]: 2025-12-07 19:50:43.900306772 +0000 UTC m=+0.151380990 container remove 1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488 (image=quay.io/ceph/ceph:v19, name=vibrant_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 14:50:43 np0005549633 systemd[1]: libpod-conmon-1965d4f91dd8f8811d03d01db076d6295b6ed7279e9c95184b926bac3fe04488.scope: Deactivated successfully.
Dec  7 14:50:44 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:50:44 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:50:44 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:44 np0005549633 python3[85029]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:50:44 np0005549633 podman[85030]: 2025-12-07 19:50:44.561387796 +0000 UTC m=+0.076235711 container create 817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61 (image=quay.io/ceph/ceph:v19, name=blissful_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 14:50:44 np0005549633 systemd[1]: Started libpod-conmon-817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61.scope.
Dec  7 14:50:44 np0005549633 podman[85030]: 2025-12-07 19:50:44.532218663 +0000 UTC m=+0.047066618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:50:44 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:44 np0005549633 ceph-mon[74384]: Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:50:44 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fa4660b182d7d41bf41e6dfe2b9bd56c0e37170817df48cd9657e3d1486b4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:44 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fa4660b182d7d41bf41e6dfe2b9bd56c0e37170817df48cd9657e3d1486b4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:44 np0005549633 podman[85030]: 2025-12-07 19:50:44.663280909 +0000 UTC m=+0.178128854 container init 817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61 (image=quay.io/ceph/ceph:v19, name=blissful_gauss, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 14:50:44 np0005549633 podman[85030]: 2025-12-07 19:50:44.672063527 +0000 UTC m=+0.186911442 container start 817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61 (image=quay.io/ceph/ceph:v19, name=blissful_gauss, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:44 np0005549633 podman[85030]: 2025-12-07 19:50:44.676669916 +0000 UTC m=+0.191517831 container attach 817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61 (image=quay.io/ceph/ceph:v19, name=blissful_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 14:50:44 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:50:44 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/673397716' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:45 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:45 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev a1794b73-9a23-42d8-88c7-5b177c2d710d (Updating mon deployment (+2 -> 3))
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:45 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec  7 14:50:45 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/673397716' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/673397716' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec  7 14:50:45 np0005549633 blissful_gauss[85045]: pool 'vms' created
Dec  7 14:50:45 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec  7 14:50:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 14 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:50:45 np0005549633 systemd[1]: libpod-817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61.scope: Deactivated successfully.
Dec  7 14:50:45 np0005549633 podman[85030]: 2025-12-07 19:50:45.684691434 +0000 UTC m=+1.199539349 container died 817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61 (image=quay.io/ceph/ceph:v19, name=blissful_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 14:50:45 np0005549633 systemd[1]: var-lib-containers-storage-overlay-d4fa4660b182d7d41bf41e6dfe2b9bd56c0e37170817df48cd9657e3d1486b4e-merged.mount: Deactivated successfully.
Dec  7 14:50:45 np0005549633 podman[85030]: 2025-12-07 19:50:45.736465064 +0000 UTC m=+1.251312949 container remove 817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61 (image=quay.io/ceph/ceph:v19, name=blissful_gauss, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:50:45 np0005549633 systemd[1]: libpod-conmon-817c3a509a6eb2e37718f4cf33b309c5eac0ba0702cb8b94b22ebb08400c4f61.scope: Deactivated successfully.
Dec  7 14:50:46 np0005549633 python3[85108]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:50:46 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec  7 14:50:46 np0005549633 podman[85109]: 2025-12-07 19:50:46.213501477 +0000 UTC m=+0.054748064 container create a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0 (image=quay.io/ceph/ceph:v19, name=nice_darwin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  7 14:50:46 np0005549633 systemd[1]: Started libpod-conmon-a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0.scope.
Dec  7 14:50:46 np0005549633 podman[85109]: 2025-12-07 19:50:46.18805639 +0000 UTC m=+0.029303007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:50:46 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3528475c6b149bb0a1c6d71ab7740411b22624be66fadbd8a51c62f1c5b16d64/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3528475c6b149bb0a1c6d71ab7740411b22624be66fadbd8a51c62f1c5b16d64/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:46 np0005549633 podman[85109]: 2025-12-07 19:50:46.310299757 +0000 UTC m=+0.151546324 container init a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0 (image=quay.io/ceph/ceph:v19, name=nice_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 14:50:46 np0005549633 podman[85109]: 2025-12-07 19:50:46.316946965 +0000 UTC m=+0.158193532 container start a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0 (image=quay.io/ceph/ceph:v19, name=nice_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:46 np0005549633 podman[85109]: 2025-12-07 19:50:46.3206695 +0000 UTC m=+0.161916097 container attach a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0 (image=quay.io/ceph/ceph:v19, name=nice_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: Deploying daemon mon.compute-2 on compute-2
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/673397716' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: Cluster is now healthy
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 14:50:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1565839540' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:46 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 15 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:50:47 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1565839540' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1565839540' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec  7 14:50:47 np0005549633 nice_darwin[85125]: pool 'volumes' created
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec  7 14:50:47 np0005549633 systemd[1]: libpod-a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0.scope: Deactivated successfully.
Dec  7 14:50:47 np0005549633 podman[85109]: 2025-12-07 19:50:47.729987504 +0000 UTC m=+1.571234101 container died a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0 (image=quay.io/ceph/ceph:v19, name=nice_darwin, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 14:50:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:50:47 np0005549633 systemd[1]: var-lib-containers-storage-overlay-3528475c6b149bb0a1c6d71ab7740411b22624be66fadbd8a51c62f1c5b16d64-merged.mount: Deactivated successfully.
Dec  7 14:50:47 np0005549633 podman[85109]: 2025-12-07 19:50:47.794864316 +0000 UTC m=+1.636110913 container remove a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0 (image=quay.io/ceph/ceph:v19, name=nice_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  7 14:50:47 np0005549633 systemd[1]: libpod-conmon-a80f004a709a3ecae1c6e88c66e3e2f8d0b94aa507968c5f5ed44aed4b65b1b0.scope: Deactivated successfully.
Dec  7 14:50:48 np0005549633 python3[85189]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:50:48 np0005549633 podman[85190]: 2025-12-07 19:50:48.287166012 +0000 UTC m=+0.152987985 container create 4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d (image=quay.io/ceph/ceph:v19, name=peaceful_black, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:50:48 np0005549633 podman[85190]: 2025-12-07 19:50:48.200235246 +0000 UTC m=+0.066057279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:50:48 np0005549633 systemd[1]: Started libpod-conmon-4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d.scope.
Dec  7 14:50:48 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6ab5674b7472d1f099942aeb68fde3cd48cbbbbfb4c2c368b305f5e5c24eeb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a6ab5674b7472d1f099942aeb68fde3cd48cbbbbfb4c2c368b305f5e5c24eeb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:48 np0005549633 podman[85190]: 2025-12-07 19:50:48.402109298 +0000 UTC m=+0.267931321 container init 4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d (image=quay.io/ceph/ceph:v19, name=peaceful_black, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:50:48 np0005549633 podman[85190]: 2025-12-07 19:50:48.409403814 +0000 UTC m=+0.275225747 container start 4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d (image=quay.io/ceph/ceph:v19, name=peaceful_black, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 14:50:48 np0005549633 podman[85190]: 2025-12-07 19:50:48.413211676 +0000 UTC m=+0.279033639 container attach 4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d (image=quay.io/ceph/ceph:v19, name=peaceful_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1565839540' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 14:50:48 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715576705' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:49 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/715576705' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Dec  7 14:50:49 np0005549633 peaceful_black[85205]: pool 'backups' created
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/715576705' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:49 np0005549633 systemd[1]: libpod-4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d.scope: Deactivated successfully.
Dec  7 14:50:49 np0005549633 podman[85190]: 2025-12-07 19:50:49.748659427 +0000 UTC m=+1.614481370 container died 4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d (image=quay.io/ceph/ceph:v19, name=peaceful_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:50:49 np0005549633 systemd[1]: var-lib-containers-storage-overlay-5a6ab5674b7472d1f099942aeb68fde3cd48cbbbbfb4c2c368b305f5e5c24eeb-merged.mount: Deactivated successfully.
Dec  7 14:50:49 np0005549633 podman[85190]: 2025-12-07 19:50:49.809186235 +0000 UTC m=+1.675008168 container remove 4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d (image=quay.io/ceph/ceph:v19, name=peaceful_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 14:50:49 np0005549633 systemd[1]: libpod-conmon-4500e8e5e5ff198ed5a7a5c8e06a9c8b322e673acab05304048ac8b04962862d.scope: Deactivated successfully.
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  7 14:50:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:50:50 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec  7 14:50:50 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec  7 14:50:50 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:50 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:50 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:50 np0005549633 python3[85271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:50:50 np0005549633 podman[85272]: 2025-12-07 19:50:50.406247488 +0000 UTC m=+0.182967867 container create 3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f (image=quay.io/ceph/ceph:v19, name=boring_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:50:50 np0005549633 systemd[1]: Started libpod-conmon-3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f.scope.
Dec  7 14:50:50 np0005549633 podman[85272]: 2025-12-07 19:50:50.381448996 +0000 UTC m=+0.158169365 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:50:50 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:50:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ec1604a0e6bda464512819a1cb8872f53424a7be4f4efb0b1e41218cf3b73a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07ec1604a0e6bda464512819a1cb8872f53424a7be4f4efb0b1e41218cf3b73a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:50:50 np0005549633 podman[85272]: 2025-12-07 19:50:50.526614032 +0000 UTC m=+0.303334421 container init 3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f (image=quay.io/ceph/ceph:v19, name=boring_cohen, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:50:50 np0005549633 podman[85272]: 2025-12-07 19:50:50.544228169 +0000 UTC m=+0.320948528 container start 3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f (image=quay.io/ceph/ceph:v19, name=boring_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 14:50:50 np0005549633 podman[85272]: 2025-12-07 19:50:50.549100104 +0000 UTC m=+0.325820463 container attach 3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f (image=quay.io/ceph/ceph:v19, name=boring_cohen, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:51 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:51 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:51 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:52 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:52 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 14:50:52 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:52 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:52 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:53 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:53 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:53 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:53 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:53 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:53 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:53 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v70: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:54 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:54 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 14:50:54 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:54 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:55 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:55 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:50:55 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:55 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:55 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T19:50:50.063264+0000
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : created 2025-12-07T19:48:33.416686+0000
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dyzcyj(active, since 119s)
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  7 14:50:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:50:56
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [balancer INFO root] Some PGs (0.250000) are inactive; try again later
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:56 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 14:50:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3451380409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:50:57 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:57 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  7 14:50:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:50:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:57 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v72: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:50:57 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:57 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:58 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2927630988; not ready for session (expect reconnect)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  7 14:50:58 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:58 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: Deploying daemon mon.compute-1 on compute-1
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0 calling monitor election
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: Health detail: HEALTH_WARN 2 pool(s) do not have an application enabled
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: [WRN] POOL_APP_NOT_ENABLED: 2 pool(s) do not have an application enabled
Dec  7 14:50:58 np0005549633 ceph-mon[74384]:    application not enabled on pool 'vms'
Dec  7 14:50:58 np0005549633 ceph-mon[74384]:    application not enabled on pool 'volumes'
Dec  7 14:50:58 np0005549633 ceph-mon[74384]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec  7 14:50:58 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:50:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:50:59 np0005549633 ceph-mgr[74680]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec  7 14:50:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:50:59.066+0000 7f957f554640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec  7 14:50:59 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:50:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:50:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:50:59 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 14:50:59 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v74: 4 pgs: 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:00 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:51:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:51:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:51:00 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 14:51:01 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 1dd09a65-ec24-42d3-8e48-405753fb3ac3 (Global Recovery Event) in 15 seconds
Dec  7 14:51:01 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:51:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:51:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:51:01 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 14:51:01 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v75: 4 pgs: 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:02 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:51:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:51:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:51:02 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsid a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : last_changed 2025-12-07T19:50:56.175798+0000
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : created 2025-12-07T19:48:33.416686+0000
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap 
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.dyzcyj(active, since 2m)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev a1794b73-9a23-42d8-88c7-5b177c2d710d (Updating mon deployment (+2 -> 3))
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event a1794b73-9a23-42d8-88c7-5b177c2d710d (Updating mon deployment (+2 -> 3)) in 18 seconds
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 0ea38b15-864b-48cb-9261-d468afc6c4bb (Updating mgr deployment (+2 -> 3))
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.orbdku", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.orbdku", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3451380409' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Dec  7 14:51:03 np0005549633 boring_cohen[85287]: pool 'images' created
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 2f9a987b-2077-42d8-8f24-60e753490198 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3451380409' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0 calling monitor election
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-2 calling monitor election
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-1 calling monitor election
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Dec  7 14:51:03 np0005549633 ceph-mon[74384]:    application not enabled on pool 'vms'
Dec  7 14:51:03 np0005549633 ceph-mon[74384]:    application not enabled on pool 'volumes'
Dec  7 14:51:03 np0005549633 ceph-mon[74384]:    application not enabled on pool 'backups'
Dec  7 14:51:03 np0005549633 ceph-mon[74384]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.orbdku", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.orbdku", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.orbdku on compute-2
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.orbdku on compute-2
Dec  7 14:51:03 np0005549633 systemd[1]: libpod-3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f.scope: Deactivated successfully.
Dec  7 14:51:03 np0005549633 podman[85272]: 2025-12-07 19:51:03.37101569 +0000 UTC m=+13.147736059 container died 3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f (image=quay.io/ceph/ceph:v19, name=boring_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:03 np0005549633 systemd[1]: var-lib-containers-storage-overlay-07ec1604a0e6bda464512819a1cb8872f53424a7be4f4efb0b1e41218cf3b73a-merged.mount: Deactivated successfully.
Dec  7 14:51:03 np0005549633 podman[85272]: 2025-12-07 19:51:03.415043955 +0000 UTC m=+13.191764304 container remove 3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f (image=quay.io/ceph/ceph:v19, name=boring_cohen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 14:51:03 np0005549633 systemd[1]: libpod-conmon-3aa2b0fce68e42a0f707ded01d703103a8ce2f2e9eb5568363c8de17e683e53f.scope: Deactivated successfully.
Dec  7 14:51:03 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v77: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:51:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:03 np0005549633 python3[85353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:03 np0005549633 podman[85354]: 2025-12-07 19:51:03.939340227 +0000 UTC m=+0.080726794 container create 92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:03 np0005549633 systemd[1]: Started libpod-conmon-92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2.scope.
Dec  7 14:51:03 np0005549633 podman[85354]: 2025-12-07 19:51:03.908101127 +0000 UTC m=+0.049487744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:04 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:04 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33680d82e8ba3523ef48e879bb71af40b74332daf1ffb1a432d85665d83dcbf1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:04 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33680d82e8ba3523ef48e879bb71af40b74332daf1ffb1a432d85665d83dcbf1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:04 np0005549633 podman[85354]: 2025-12-07 19:51:04.039144098 +0000 UTC m=+0.180530635 container init 92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 14:51:04 np0005549633 podman[85354]: 2025-12-07 19:51:04.051148926 +0000 UTC m=+0.192535463 container start 92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  7 14:51:04 np0005549633 podman[85354]: 2025-12-07 19:51:04.055749435 +0000 UTC m=+0.197136012 container attach 92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:04 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1502560927; not ready for session (expect reconnect)
Dec  7 14:51:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:51:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:51:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  7 14:51:04 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 14:51:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4277614920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Dec  7 14:51:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 21 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=21 pruub=13.664526939s) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active pruub 61.794506073s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 21 pg[2.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=21 pruub=13.664526939s) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown pruub 61.794506073s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Dec  7 14:51:05 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 5dbf7cfc-4714-416e-971a-7814c2ce48ec (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3451380409' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.orbdku", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: Deploying daemon mgr.compute-2.orbdku on compute-2
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:05 np0005549633 ceph-mgr[74680]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec  7 14:51:05 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:05.181+0000 7f957f554640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec  7 14:51:05 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v79: 36 pgs: 1 creating+peering, 31 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:51:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4277614920' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Dec  7 14:51:06 np0005549633 sharp_meitner[85369]: pool 'cephfs.cephfs.meta' created
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev a6d2d3ff-2503-43a6-aea0-bdf405a01e4e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 2f9a987b-2077-42d8-8f24-60e753490198 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 2f9a987b-2077-42d8-8f24-60e753490198 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 3 seconds
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 5dbf7cfc-4714-416e-971a-7814c2ce48ec (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 5dbf7cfc-4714-416e-971a-7814c2ce48ec (PG autoscaler increasing pool 3 PGs from 1 to 32) in 1 seconds
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev a6d2d3ff-2503-43a6-aea0-bdf405a01e4e (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event a6d2d3ff-2503-43a6-aea0-bdf405a01e4e (PG autoscaler increasing pool 4 PGs from 1 to 32) in 0 seconds
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1d( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.b( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.8( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.7( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.5( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1c( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.2( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.3( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.f( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.11( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.12( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.14( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.16( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.17( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.18( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1a( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=14/15 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.8( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.7( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.2( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.3( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.11( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.16( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.17( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.14( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1a( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 22 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=14/14 les/c/f=15/15/0 sis=21) [1] r=0 lpr=21 pi=[14,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/4277614920' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/4277614920' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:06 np0005549633 systemd[1]: libpod-92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2.scope: Deactivated successfully.
Dec  7 14:51:06 np0005549633 podman[85354]: 2025-12-07 19:51:06.15331211 +0000 UTC m=+2.294698647 container died 92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 7 completed events
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:51:06 np0005549633 systemd[1]: var-lib-containers-storage-overlay-33680d82e8ba3523ef48e879bb71af40b74332daf1ffb1a432d85665d83dcbf1-merged.mount: Deactivated successfully.
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,65 pgs not in active + clean state
Dec  7 14:51:06 np0005549633 systemd[75727]: Starting Mark boot as successful...
Dec  7 14:51:06 np0005549633 systemd[75727]: Finished Mark boot as successful.
Dec  7 14:51:06 np0005549633 podman[85354]: 2025-12-07 19:51:06.199722606 +0000 UTC m=+2.341109153 container remove 92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2 (image=quay.io/ceph/ceph:v19, name=sharp_meitner, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 14:51:06 np0005549633 systemd[1]: libpod-conmon-92a0e00491311f9213c43d5de3c92fde133cb11d8ea7942ef67395bd31e006c2.scope: Deactivated successfully.
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.cgejnh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.cgejnh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.cgejnh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:51:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.cgejnh on compute-1
Dec  7 14:51:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.cgejnh on compute-1
Dec  7 14:51:06 np0005549633 python3[85432]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:06 np0005549633 podman[85433]: 2025-12-07 19:51:06.654940626 +0000 UTC m=+0.063069145 container create d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf (image=quay.io/ceph/ceph:v19, name=quizzical_diffie, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:06 np0005549633 systemd[1]: Started libpod-conmon-d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf.scope.
Dec  7 14:51:06 np0005549633 podman[85433]: 2025-12-07 19:51:06.630054451 +0000 UTC m=+0.038182960 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:06 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:06 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7ac44319fde62987a77170c7d3311dbbdfe6548e8307837dfd2cb2327a36b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:06 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b7ac44319fde62987a77170c7d3311dbbdfe6548e8307837dfd2cb2327a36b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:06 np0005549633 podman[85433]: 2025-12-07 19:51:06.758404316 +0000 UTC m=+0.166532835 container init d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf (image=quay.io/ceph/ceph:v19, name=quizzical_diffie, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:51:06 np0005549633 podman[85433]: 2025-12-07 19:51:06.767978851 +0000 UTC m=+0.176107360 container start d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf (image=quay.io/ceph/ceph:v19, name=quizzical_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True)
Dec  7 14:51:06 np0005549633 podman[85433]: 2025-12-07 19:51:06.772887987 +0000 UTC m=+0.181016476 container attach d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf (image=quay.io/ceph/ceph:v19, name=quizzical_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  7 14:51:06 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1620977435' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:51:07 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v81: 68 pgs: 1 peering, 1 creating+peering, 63 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.cgejnh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.cgejnh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: Deploying daemon mgr.compute-1.cgejnh on compute-1
Dec  7 14:51:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:07 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  7 14:51:07 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  7 14:51:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:51:08 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:51:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  7 14:51:08 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec  7 14:51:08 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec  7 14:51:09 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v83: 68 pgs: 1 peering, 31 unknown, 36 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:51:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:09 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  7 14:51:09 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1620977435' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Dec  7 14:51:10 np0005549633 quizzical_diffie[85448]: pool 'cephfs.cephfs.data' created
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1620977435' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:10 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Dec  7 14:51:10 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:10 np0005549633 systemd[1]: libpod-d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf.scope: Deactivated successfully.
Dec  7 14:51:10 np0005549633 podman[85433]: 2025-12-07 19:51:10.274205887 +0000 UTC m=+3.682334406 container died d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf (image=quay.io/ceph/ceph:v19, name=quizzical_diffie, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 14:51:10 np0005549633 systemd[1]: var-lib-containers-storage-overlay-0b7ac44319fde62987a77170c7d3311dbbdfe6548e8307837dfd2cb2327a36b3-merged.mount: Deactivated successfully.
Dec  7 14:51:10 np0005549633 podman[85433]: 2025-12-07 19:51:10.329759199 +0000 UTC m=+3.737887718 container remove d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf (image=quay.io/ceph/ceph:v19, name=quizzical_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:10 np0005549633 systemd[1]: libpod-conmon-d06ad826148e9c7afc288f8e44828ddaae2130931658bc272edafdabb9c7dfdf.scope: Deactivated successfully.
Dec  7 14:51:10 np0005549633 python3[85511]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:10 np0005549633 podman[85512]: 2025-12-07 19:51:10.728813613 +0000 UTC m=+0.062592334 container create 4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0 (image=quay.io/ceph/ceph:v19, name=sad_chatelet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:10 np0005549633 systemd[1]: Started libpod-conmon-4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0.scope.
Dec  7 14:51:10 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db68d11cd47ad033553e2403db76731c6bee5c2b17c187483cb4a609fe114e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:10 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4db68d11cd47ad033553e2403db76731c6bee5c2b17c187483cb4a609fe114e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:10 np0005549633 podman[85512]: 2025-12-07 19:51:10.707858223 +0000 UTC m=+0.041636984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:10 np0005549633 podman[85512]: 2025-12-07 19:51:10.813836798 +0000 UTC m=+0.147615569 container init 4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0 (image=quay.io/ceph/ceph:v19, name=sad_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Dec  7 14:51:10 np0005549633 podman[85512]: 2025-12-07 19:51:10.822936012 +0000 UTC m=+0.156714783 container start 4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0 (image=quay.io/ceph/ceph:v19, name=sad_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Dec  7 14:51:10 np0005549633 podman[85512]: 2025-12-07 19:51:10.828934292 +0000 UTC m=+0.162713053 container attach 4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0 (image=quay.io/ceph/ceph:v19, name=sad_chatelet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:10 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec  7 14:51:10 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec  7 14:51:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  7 14:51:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2451944799' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  7 14:51:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  7 14:51:11 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v85: 100 pgs: 1 creating+peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:11 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:11 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  7 14:51:11 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  7 14:51:12 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mgr.compute-2.orbdku 192.168.122.102:0/3738147392; not ready for session (expect reconnect)
Dec  7 14:51:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku started
Dec  7 14:51:12 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  7 14:51:12 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  7 14:51:13 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mgr.compute-2.orbdku 192.168.122.102:0/3738147392; not ready for session (expect reconnect)
Dec  7 14:51:13 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v86: 100 pgs: 1 creating+peering, 31 unknown, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:13 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:13 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 0ea38b15-864b-48cb-9261-d468afc6c4bb (Updating mgr deployment (+2 -> 3))
Dec  7 14:51:13 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 0ea38b15-864b-48cb-9261-d468afc6c4bb (Updating mgr deployment (+2 -> 3)) in 10 seconds
Dec  7 14:51:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  7 14:51:13 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Dec  7 14:51:13 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2451944799' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Dec  7 14:51:14 np0005549633 sad_chatelet[85527]: enabled application 'rbd' on pool 'vms'
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1620977435' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2451944799' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Dec  7 14:51:14 np0005549633 systemd[1]: libpod-4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0.scope: Deactivated successfully.
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.dyzcyj(active, since 2m), standbys: compute-2.orbdku
Dec  7 14:51:14 np0005549633 podman[85512]: 2025-12-07 19:51:14.041413155 +0000 UTC m=+3.375191936 container died 4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0 (image=quay.io/ceph/ceph:v19, name=sad_chatelet, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.orbdku", "id": "compute-2.orbdku"} v 0)
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.orbdku", "id": "compute-2.orbdku"}]: dispatch
Dec  7 14:51:14 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:14 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4db68d11cd47ad033553e2403db76731c6bee5c2b17c187483cb4a609fe114e4-merged.mount: Deactivated successfully.
Dec  7 14:51:14 np0005549633 podman[85512]: 2025-12-07 19:51:14.114829841 +0000 UTC m=+3.448608592 container remove 4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0 (image=quay.io/ceph/ceph:v19, name=sad_chatelet, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 14:51:14 np0005549633 systemd[1]: libpod-conmon-4b52937c60555ad5d0e811ee737f87c24ba017f651d20bf0ed23cd64e23a08d0.scope: Deactivated successfully.
Dec  7 14:51:14 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mgr.compute-1.cgejnh 192.168.122.101:0/418937768; not ready for session (expect reconnect)
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:14 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev e8e9ba42-7ba9-4496-924b-0c6468f9cf47 (Updating crash deployment (+1 -> 3))
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh started
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:51:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:51:14 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec  7 14:51:14 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec  7 14:51:14 np0005549633 python3[85587]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:14 np0005549633 podman[85588]: 2025-12-07 19:51:14.51885876 +0000 UTC m=+0.062520614 container create 5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a (image=quay.io/ceph/ceph:v19, name=romantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 14:51:14 np0005549633 systemd[1]: Started libpod-conmon-5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a.scope.
Dec  7 14:51:14 np0005549633 podman[85588]: 2025-12-07 19:51:14.487531448 +0000 UTC m=+0.031193352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:14 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78eefee02a41e97ba6cc6b1d5caf23a77c6a0d3ec7e89876be481b4d660f94b0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:14 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78eefee02a41e97ba6cc6b1d5caf23a77c6a0d3ec7e89876be481b4d660f94b0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:14 np0005549633 podman[85588]: 2025-12-07 19:51:14.637051867 +0000 UTC m=+0.180713761 container init 5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a (image=quay.io/ceph/ceph:v19, name=romantic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 14:51:14 np0005549633 podman[85588]: 2025-12-07 19:51:14.646523729 +0000 UTC m=+0.190185583 container start 5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a (image=quay.io/ceph/ceph:v19, name=romantic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 14:51:14 np0005549633 podman[85588]: 2025-12-07 19:51:14.651349363 +0000 UTC m=+0.195011227 container attach 5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a (image=quay.io/ceph/ceph:v19, name=romantic_borg, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 14:51:14 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Dec  7 14:51:14 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3564518393' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  7 14:51:15 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from mgr.compute-1.cgejnh 192.168.122.101:0/418937768; not ready for session (expect reconnect)
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2451944799' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: Deploying daemon crash.compute-2 on compute-2
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Dec  7 14:51:15 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v89: 100 pgs: 32 peering, 1 creating+peering, 67 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.dyzcyj(active, since 2m), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.cgejnh", "id": "compute-1.cgejnh"} v 0)
Dec  7 14:51:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.cgejnh", "id": "compute-1.cgejnh"}]: dispatch
Dec  7 14:51:15 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  7 14:51:15 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  7 14:51:16 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 8 completed events
Dec  7 14:51:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:51:16 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3564518393' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  7 14:51:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  7 14:51:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:16 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  7 14:51:16 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3564518393' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Dec  7 14:51:17 np0005549633 romantic_borg[85603]: enabled application 'rbd' on pool 'volumes'
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Dec  7 14:51:17 np0005549633 systemd[1]: libpod-5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a.scope: Deactivated successfully.
Dec  7 14:51:17 np0005549633 podman[85588]: 2025-12-07 19:51:17.198471367 +0000 UTC m=+2.742133181 container died 5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a (image=quay.io/ceph/ceph:v19, name=romantic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 14:51:17 np0005549633 systemd[1]: var-lib-containers-storage-overlay-78eefee02a41e97ba6cc6b1d5caf23a77c6a0d3ec7e89876be481b4d660f94b0-merged.mount: Deactivated successfully.
Dec  7 14:51:17 np0005549633 podman[85588]: 2025-12-07 19:51:17.270500192 +0000 UTC m=+2.814162016 container remove 5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a (image=quay.io/ceph/ceph:v19, name=romantic_borg, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:17 np0005549633 systemd[1]: libpod-conmon-5d7aae28e628d00bfdbe27f3f2b2539acfd0d9ff7910ffcfe31360bce589260a.scope: Deactivated successfully.
Dec  7 14:51:17 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v91: 100 pgs: 32 peering, 68 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3564518393' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  7 14:51:17 np0005549633 python3[85666]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:17 np0005549633 podman[85667]: 2025-12-07 19:51:17.733355665 +0000 UTC m=+0.066360775 container create a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6 (image=quay.io/ceph/ceph:v19, name=sharp_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:17 np0005549633 systemd[1]: Started libpod-conmon-a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6.scope.
Dec  7 14:51:17 np0005549633 podman[85667]: 2025-12-07 19:51:17.710932204 +0000 UTC m=+0.043937354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:17 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:17 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add49665a28f4c42b61ef994c50ff3d44449daa7c58f00b001c26f95a5d20cdb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:17 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add49665a28f4c42b61ef994c50ff3d44449daa7c58f00b001c26f95a5d20cdb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:17 np0005549633 podman[85667]: 2025-12-07 19:51:17.866712386 +0000 UTC m=+0.199717516 container init a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6 (image=quay.io/ceph/ceph:v19, name=sharp_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:51:17 np0005549633 podman[85667]: 2025-12-07 19:51:17.874692718 +0000 UTC m=+0.207697868 container start a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6 (image=quay.io/ceph/ceph:v19, name=sharp_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 14:51:17 np0005549633 podman[85667]: 2025-12-07 19:51:17.879177854 +0000 UTC m=+0.212182984 container attach a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6 (image=quay.io/ceph/ceph:v19, name=sharp_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 14:51:17 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  7 14:51:17 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2457198820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2457198820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:18 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev e8e9ba42-7ba9-4496-924b-0c6468f9cf47 (Updating crash deployment (+1 -> 3))
Dec  7 14:51:18 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event e8e9ba42-7ba9-4496-924b-0c6468f9cf47 (Updating crash deployment (+1 -> 3)) in 5 seconds
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:51:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:51:18 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec  7 14:51:18 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2457198820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e28 e28: 2 total, 2 up, 2 in
Dec  7 14:51:19 np0005549633 sharp_mahavira[85682]: enabled application 'rbd' on pool 'backups'
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 2 up, 2 in
Dec  7 14:51:19 np0005549633 systemd[1]: libpod-a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6.scope: Deactivated successfully.
Dec  7 14:51:19 np0005549633 podman[85667]: 2025-12-07 19:51:19.26153366 +0000 UTC m=+1.594538800 container died a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6 (image=quay.io/ceph/ceph:v19, name=sharp_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:19 np0005549633 systemd[1]: var-lib-containers-storage-overlay-add49665a28f4c42b61ef994c50ff3d44449daa7c58f00b001c26f95a5d20cdb-merged.mount: Deactivated successfully.
Dec  7 14:51:19 np0005549633 podman[85667]: 2025-12-07 19:51:19.306034845 +0000 UTC m=+1.639039955 container remove a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6 (image=quay.io/ceph/ceph:v19, name=sharp_mahavira, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:19 np0005549633 systemd[1]: libpod-conmon-a0e443e6743579c7249a41ab79e9dd680e2baa8030228f747fbf2ded13098da6.scope: Deactivated successfully.
Dec  7 14:51:19 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v93: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.578653056 +0000 UTC m=+0.089129014 container create be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_khayyam, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:19 np0005549633 systemd[1]: Started libpod-conmon-be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1.scope.
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.545172988 +0000 UTC m=+0.055649016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:51:19 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.665879978 +0000 UTC m=+0.176356006 container init be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:19 np0005549633 python3[85839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.673798328 +0000 UTC m=+0.184274296 container start be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  7 14:51:19 np0005549633 condescending_khayyam[85849]: 167 167
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.67808389 +0000 UTC m=+0.188559858 container attach be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_khayyam, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 14:51:19 np0005549633 systemd[1]: libpod-be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1.scope: Deactivated successfully.
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.679440119 +0000 UTC m=+0.189916077 container died be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_khayyam, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 14:51:19 np0005549633 systemd[1]: var-lib-containers-storage-overlay-e1f76a531bacf40c1776a489abd113671be1ca8403adeef5d180c52bfa322fb8-merged.mount: Deactivated successfully.
Dec  7 14:51:19 np0005549633 podman[85830]: 2025-12-07 19:51:19.72837919 +0000 UTC m=+0.238855128 container remove be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_khayyam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 14:51:19 np0005549633 systemd[1]: libpod-conmon-be1c690dcdac65d122fd5829e958e822cf5c6b121edc52da973a3b6f782953b1.scope: Deactivated successfully.
Dec  7 14:51:19 np0005549633 podman[85853]: 2025-12-07 19:51:19.789612664 +0000 UTC m=+0.058398104 container create 38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e (image=quay.io/ceph/ceph:v19, name=elegant_austin, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2457198820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:19 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:51:19 np0005549633 systemd[1]: Started libpod-conmon-38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e.scope.
Dec  7 14:51:19 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.0 deep-scrub starts
Dec  7 14:51:19 np0005549633 podman[85853]: 2025-12-07 19:51:19.763983493 +0000 UTC m=+0.032768983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:19 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.0 deep-scrub ok
Dec  7 14:51:19 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43019ce6bb9966b3e838900495f4b01a9cb622687503f9cd1c99f47a338c3b6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:19 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f43019ce6bb9966b3e838900495f4b01a9cb622687503f9cd1c99f47a338c3b6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:19 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:19 np0005549633 podman[85853]: 2025-12-07 19:51:19.888021975 +0000 UTC m=+0.156807435 container init 38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e (image=quay.io/ceph/ceph:v19, name=elegant_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  7 14:51:19 np0005549633 podman[85853]: 2025-12-07 19:51:19.893867001 +0000 UTC m=+0.162652451 container start 38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e (image=quay.io/ceph/ceph:v19, name=elegant_austin, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 14:51:19 np0005549633 podman[85853]: 2025-12-07 19:51:19.898164633 +0000 UTC m=+0.166950073 container attach 38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e (image=quay.io/ceph/ceph:v19, name=elegant_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:19 np0005549633 podman[85890]: 2025-12-07 19:51:19.933306127 +0000 UTC m=+0.048417330 container create 0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:19 np0005549633 systemd[1]: Started libpod-conmon-0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f.scope.
Dec  7 14:51:20 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:20 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017581ae66dc4c890500306219556fbedaef6d587d5692741dbcaf6ab455182f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:20 np0005549633 podman[85890]: 2025-12-07 19:51:19.913094064 +0000 UTC m=+0.028205297 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:51:20 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017581ae66dc4c890500306219556fbedaef6d587d5692741dbcaf6ab455182f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:20 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017581ae66dc4c890500306219556fbedaef6d587d5692741dbcaf6ab455182f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:20 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017581ae66dc4c890500306219556fbedaef6d587d5692741dbcaf6ab455182f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:20 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017581ae66dc4c890500306219556fbedaef6d587d5692741dbcaf6ab455182f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:20 np0005549633 podman[85890]: 2025-12-07 19:51:20.027261683 +0000 UTC m=+0.142372946 container init 0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:20 np0005549633 podman[85890]: 2025-12-07 19:51:20.046032207 +0000 UTC m=+0.161143420 container start 0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Dec  7 14:51:20 np0005549633 podman[85890]: 2025-12-07 19:51:20.049873998 +0000 UTC m=+0.164985211 container attach 0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.18( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.1b( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.1c( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.1d( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.1a( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.d( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.a( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.c( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.e( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.9( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.1( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.5( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.3( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.5( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.a( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.d( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.c( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.8( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.f( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.e( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.9( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.11( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.10( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.13( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.15( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.15( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.14( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.13( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.16( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[4.1f( empty local-lis/les=0/0 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[3.1a( empty local-lis/les=0/0 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.618140221s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213752747s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.618114471s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213752747s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617812157s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213676453s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617794991s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213676453s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617543221s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213630676s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617527962s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213630676s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617204666s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213462830s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617188454s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213462830s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617022514s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213455200s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.617002487s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213455200s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616961479s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213554382s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616947174s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213554382s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616663933s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213371277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616649628s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213371277s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616455078s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213294983s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616437912s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213294983s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616044044s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213088989s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.616024971s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213088989s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615881920s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213096619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615869522s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213096619s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615685463s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.213020325s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615672112s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.213020325s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615294456s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.212760925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615282059s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.212760925s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615205765s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.212760925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.615194321s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.212760925s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.549149513s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.146827698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.549137115s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.146827698s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.614391327s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 active pruub 73.212760925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=29 pruub=9.614373207s) [0] r=-1 lpr=29 pi=[21,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.212760925s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2002676497' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  7 14:51:20 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:51:20 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2002676497' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  7 14:51:21 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 9 completed events
Dec  7 14:51:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:51:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  7 14:51:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:21 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 1db24a2c-3f61-448c-9f9b-fc6de5a1e281 (Global Recovery Event) in 15 seconds
Dec  7 14:51:21 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v95: 100 pgs: 31 peering, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "9f04f14e-6c8c-454d-86ce-8847e58a76c7"} v 0)
Dec  7 14:51:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f04f14e-6c8c-454d-86ce-8847e58a76c7"}]: dispatch
Dec  7 14:51:21 np0005549633 musing_ishizaka[85908]: --> passed data devices: 0 physical, 1 LVM
Dec  7 14:51:21 np0005549633 musing_ishizaka[85908]: --> All data devices are unavailable
Dec  7 14:51:21 np0005549633 systemd[1]: libpod-0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f.scope: Deactivated successfully.
Dec  7 14:51:21 np0005549633 systemd[1]: libpod-0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f.scope: Consumed 1.595s CPU time.
Dec  7 14:51:21 np0005549633 podman[85890]: 2025-12-07 19:51:21.62215915 +0000 UTC m=+1.737270403 container died 0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec  7 14:51:21 np0005549633 systemd[1]: var-lib-containers-storage-overlay-017581ae66dc4c890500306219556fbedaef6d587d5692741dbcaf6ab455182f-merged.mount: Deactivated successfully.
Dec  7 14:51:21 np0005549633 podman[85890]: 2025-12-07 19:51:21.796716387 +0000 UTC m=+1.911827630 container remove 0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:21 np0005549633 systemd[1]: libpod-conmon-0a0afda1299a6d924ead41a839e25fba9e9cb635681eda0194ec8e33b129a53f.scope: Deactivated successfully.
Dec  7 14:51:21 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.18 deep-scrub starts
Dec  7 14:51:21 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.18 deep-scrub ok
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2002676497' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Dec  7 14:51:22 np0005549633 elegant_austin[85886]: enabled application 'rbd' on pool 'images'
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e30 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.14( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.16( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.13( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.d( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.f( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.c( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.5( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.3( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.10( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.a( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=24/24 les/c/f=26/26/0 sis=29) [1] r=0 lpr=29 pi=[24,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 30 pg[3.1c( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=22/22 les/c/f=23/23/0 sis=29) [1] r=0 lpr=29 pi=[22,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:51:22 np0005549633 systemd[1]: libpod-38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e.scope: Deactivated successfully.
Dec  7 14:51:22 np0005549633 podman[85853]: 2025-12-07 19:51:22.149650181 +0000 UTC m=+2.418435621 container died 38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e (image=quay.io/ceph/ceph:v19, name=elegant_austin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:51:22 np0005549633 systemd[1]: var-lib-containers-storage-overlay-f43019ce6bb9966b3e838900495f4b01a9cb622687503f9cd1c99f47a338c3b6-merged.mount: Deactivated successfully.
Dec  7 14:51:22 np0005549633 podman[85853]: 2025-12-07 19:51:22.19062555 +0000 UTC m=+2.459410990 container remove 38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e (image=quay.io/ceph/ceph:v19, name=elegant_austin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 14:51:22 np0005549633 systemd[1]: libpod-conmon-38534bd387b5cd51a27305c3fd605169ead6ef2e3346d6782459f9520f30eb1e.scope: Deactivated successfully.
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9f04f14e-6c8c-454d-86ce-8847e58a76c7"}]': finished
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.102:0/3716003882' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f04f14e-6c8c-454d-86ce-8847e58a76c7"}]: dispatch
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9f04f14e-6c8c-454d-86ce-8847e58a76c7"}]: dispatch
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:22 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:22 np0005549633 python3[86066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.545975976 +0000 UTC m=+0.051703190 container create 5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hellman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:51:22 np0005549633 systemd[1]: Started libpod-conmon-5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03.scope.
Dec  7 14:51:22 np0005549633 podman[86099]: 2025-12-07 19:51:22.610094393 +0000 UTC m=+0.052918017 container create 66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b (image=quay.io/ceph/ceph:v19, name=angry_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.522984843 +0000 UTC m=+0.028712057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:51:22 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:22 np0005549633 systemd[1]: Started libpod-conmon-66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b.scope.
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.640639108 +0000 UTC m=+0.146366332 container init 5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.652176266 +0000 UTC m=+0.157903450 container start 5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.65609775 +0000 UTC m=+0.161824954 container attach 5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 14:51:22 np0005549633 quirky_hellman[86114]: 167 167
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.660813261 +0000 UTC m=+0.166540455 container died 5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hellman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 14:51:22 np0005549633 podman[86099]: 2025-12-07 19:51:22.587706152 +0000 UTC m=+0.030529776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:22 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:22 np0005549633 systemd[1]: libpod-5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03.scope: Deactivated successfully.
Dec  7 14:51:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa926476488461f0ec21d8284ec41b874ff29ff539d7d6ff7964b9dbe41cc27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:22 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa926476488461f0ec21d8284ec41b874ff29ff539d7d6ff7964b9dbe41cc27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:22 np0005549633 systemd[1]: var-lib-containers-storage-overlay-33912bb2c8746aa9111b5b1326687d6708a3a608ef696d03f9cefe03b2bd4d9e-merged.mount: Deactivated successfully.
Dec  7 14:51:22 np0005549633 podman[86085]: 2025-12-07 19:51:22.734090674 +0000 UTC m=+0.239817878 container remove 5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_hellman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Dec  7 14:51:22 np0005549633 podman[86099]: 2025-12-07 19:51:22.742387392 +0000 UTC m=+0.185211036 container init 66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b (image=quay.io/ceph/ceph:v19, name=angry_wright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:22 np0005549633 podman[86099]: 2025-12-07 19:51:22.749103445 +0000 UTC m=+0.191927059 container start 66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b (image=quay.io/ceph/ceph:v19, name=angry_wright, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:22 np0005549633 podman[86099]: 2025-12-07 19:51:22.754023742 +0000 UTC m=+0.196847346 container attach 66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b (image=quay.io/ceph/ceph:v19, name=angry_wright, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:22 np0005549633 systemd[1]: libpod-conmon-5b5a6084050e19965ef1b0d0e6cd05c98e32f7b15d465caedf5ff43bb537fd03.scope: Deactivated successfully.
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  7 14:51:22 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  7 14:51:22 np0005549633 podman[86144]: 2025-12-07 19:51:22.925372058 +0000 UTC m=+0.059062938 container create 39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 14:51:22 np0005549633 systemd[1]: Started libpod-conmon-39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b.scope.
Dec  7 14:51:22 np0005549633 podman[86144]: 2025-12-07 19:51:22.900110096 +0000 UTC m=+0.033801066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:51:22 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5692ae71e9c9b8743817b45728d0232a4e161a3ee8a0e69a929b8fd5f939caf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5692ae71e9c9b8743817b45728d0232a4e161a3ee8a0e69a929b8fd5f939caf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5692ae71e9c9b8743817b45728d0232a4e161a3ee8a0e69a929b8fd5f939caf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:23 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5692ae71e9c9b8743817b45728d0232a4e161a3ee8a0e69a929b8fd5f939caf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:23 np0005549633 podman[86144]: 2025-12-07 19:51:23.020400628 +0000 UTC m=+0.154091518 container init 39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 14:51:23 np0005549633 podman[86144]: 2025-12-07 19:51:23.028856069 +0000 UTC m=+0.162546979 container start 39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:23 np0005549633 podman[86144]: 2025-12-07 19:51:23.034103692 +0000 UTC m=+0.167794592 container attach 39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1557592222' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]: {
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:    "1": [
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:        {
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "devices": [
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "/dev/loop3"
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            ],
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "lv_name": "ceph_lv0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "lv_size": "21470642176",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8ac706f-8288-541e-8e56-e1124d9b483d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bde32eb9-6f67-49a9-82c5-0c88a97712bc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "lv_uuid": "SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "name": "ceph_lv0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "tags": {
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.block_uuid": "SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.cephx_lockbox_secret": "",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.cluster_fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.cluster_name": "ceph",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.crush_device_class": "",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.encrypted": "0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.osd_fsid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.osd_id": "1",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.type": "block",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.vdo": "0",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:                "ceph.with_tpm": "0"
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            },
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "type": "block",
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:            "vg_name": "ceph_vg0"
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:        }
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]:    ]
Dec  7 14:51:23 np0005549633 dazzling_almeida[86180]: }
Dec  7 14:51:23 np0005549633 systemd[1]: libpod-39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b.scope: Deactivated successfully.
Dec  7 14:51:23 np0005549633 podman[86144]: 2025-12-07 19:51:23.372886743 +0000 UTC m=+0.506577643 container died 39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  7 14:51:23 np0005549633 systemd[1]: var-lib-containers-storage-overlay-5692ae71e9c9b8743817b45728d0232a4e161a3ee8a0e69a929b8fd5f939caf7-merged.mount: Deactivated successfully.
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2002676497' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9f04f14e-6c8c-454d-86ce-8847e58a76c7"}]': finished
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1557592222' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  7 14:51:23 np0005549633 podman[86144]: 2025-12-07 19:51:23.448762701 +0000 UTC m=+0.582453581 container remove 39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_almeida, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:23 np0005549633 systemd[1]: libpod-conmon-39938d5f0f23804728462893bb6a71fac9cae049fe0325dcb52363bc80c5951b.scope: Deactivated successfully.
Dec  7 14:51:23 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v98: 100 pgs: 31 peering, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1557592222' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Dec  7 14:51:23 np0005549633 angry_wright[86119]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:23 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:23 np0005549633 systemd[1]: libpod-66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b.scope: Deactivated successfully.
Dec  7 14:51:23 np0005549633 podman[86099]: 2025-12-07 19:51:23.638139245 +0000 UTC m=+1.080962869 container died 66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b (image=quay.io/ceph/ceph:v19, name=angry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 14:51:23 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8fa926476488461f0ec21d8284ec41b874ff29ff539d7d6ff7964b9dbe41cc27-merged.mount: Deactivated successfully.
Dec  7 14:51:23 np0005549633 podman[86099]: 2025-12-07 19:51:23.71387786 +0000 UTC m=+1.156701454 container remove 66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b (image=quay.io/ceph/ceph:v19, name=angry_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:23 np0005549633 systemd[1]: libpod-conmon-66dd3f7a6765357bb80e009a730f48f06be12cc997f8b2c6db3ef6bcfbf9e16b.scope: Deactivated successfully.
Dec  7 14:51:23 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec  7 14:51:23 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec  7 14:51:24 np0005549633 python3[86292]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:24 np0005549633 podman[86318]: 2025-12-07 19:51:24.078344412 +0000 UTC m=+0.050861953 container create 4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c (image=quay.io/ceph/ceph:v19, name=pedantic_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:24 np0005549633 systemd[1]: Started libpod-conmon-4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c.scope.
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.135141561 +0000 UTC m=+0.049319859 container create 78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_blackwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 14:51:24 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:24 np0005549633 podman[86318]: 2025-12-07 19:51:24.057153008 +0000 UTC m=+0.029670579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47c9584bcb9877ddac6b76687c3b97551eb665401c81e4b62418cbd549bd2c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d47c9584bcb9877ddac6b76687c3b97551eb665401c81e4b62418cbd549bd2c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:24 np0005549633 systemd[1]: Started libpod-conmon-78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4.scope.
Dec  7 14:51:24 np0005549633 podman[86318]: 2025-12-07 19:51:24.164632134 +0000 UTC m=+0.137149755 container init 4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c (image=quay.io/ceph/ceph:v19, name=pedantic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 14:51:24 np0005549633 podman[86318]: 2025-12-07 19:51:24.171267276 +0000 UTC m=+0.143784867 container start 4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c (image=quay.io/ceph/ceph:v19, name=pedantic_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec  7 14:51:24 np0005549633 podman[86318]: 2025-12-07 19:51:24.176103391 +0000 UTC m=+0.148620982 container attach 4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c (image=quay.io/ceph/ceph:v19, name=pedantic_brahmagupta, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:24 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.200991564 +0000 UTC m=+0.115169872 container init 78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_blackwell, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.20637407 +0000 UTC m=+0.120552408 container start 78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_blackwell, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.11411651 +0000 UTC m=+0.028294808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.210825555 +0000 UTC m=+0.125003893 container attach 78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_blackwell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 14:51:24 np0005549633 youthful_blackwell[86365]: 167 167
Dec  7 14:51:24 np0005549633 systemd[1]: libpod-78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4.scope: Deactivated successfully.
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.215235489 +0000 UTC m=+0.129413777 container died 78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:24 np0005549633 systemd[1]: var-lib-containers-storage-overlay-c414cf3316ec6f1080703eb13b06887f6aed4da57a8444596f7ad9dc1b635ce2-merged.mount: Deactivated successfully.
Dec  7 14:51:24 np0005549633 podman[86344]: 2025-12-07 19:51:24.253688415 +0000 UTC m=+0.167866713 container remove 78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:24 np0005549633 systemd[1]: libpod-conmon-78c934d28eed103a0cb9f781606e304cef5f6abbb08599d99e61d3603a0ea5b4.scope: Deactivated successfully.
Dec  7 14:51:24 np0005549633 podman[86408]: 2025-12-07 19:51:24.502257699 +0000 UTC m=+0.059908236 container create 515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  7 14:51:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  7 14:51:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1470392527' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  7 14:51:24 np0005549633 systemd[1]: Started libpod-conmon-515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc.scope.
Dec  7 14:51:24 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:24 np0005549633 podman[86408]: 2025-12-07 19:51:24.469070578 +0000 UTC m=+0.026721155 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:51:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/088fd996e882150ff41a90e9eae15e611f176aae742b9409d13087af55bd2029/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/088fd996e882150ff41a90e9eae15e611f176aae742b9409d13087af55bd2029/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/088fd996e882150ff41a90e9eae15e611f176aae742b9409d13087af55bd2029/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/088fd996e882150ff41a90e9eae15e611f176aae742b9409d13087af55bd2029/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:24 np0005549633 podman[86408]: 2025-12-07 19:51:24.581939639 +0000 UTC m=+0.139590216 container init 515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:24 np0005549633 podman[86408]: 2025-12-07 19:51:24.588649214 +0000 UTC m=+0.146299751 container start 515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:24 np0005549633 podman[86408]: 2025-12-07 19:51:24.592894055 +0000 UTC m=+0.150544622 container attach 515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 14:51:24 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1557592222' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  7 14:51:24 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1470392527' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  7 14:51:25 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  7 14:51:25 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  7 14:51:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  7 14:51:25 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v100: 100 pgs: 31 peering, 69 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:25 np0005549633 lvm[86487]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:51:25 np0005549633 lvm[86487]: VG ceph_vg0 finished
Dec  7 14:51:25 np0005549633 keen_lewin[86425]: {}
Dec  7 14:51:25 np0005549633 systemd[1]: libpod-515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc.scope: Deactivated successfully.
Dec  7 14:51:25 np0005549633 systemd[1]: libpod-515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc.scope: Consumed 1.331s CPU time.
Dec  7 14:51:25 np0005549633 podman[86408]: 2025-12-07 19:51:25.714933475 +0000 UTC m=+1.272584002 container died 515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:25 np0005549633 systemd[1]: var-lib-containers-storage-overlay-088fd996e882150ff41a90e9eae15e611f176aae742b9409d13087af55bd2029-merged.mount: Deactivated successfully.
Dec  7 14:51:25 np0005549633 podman[86408]: 2025-12-07 19:51:25.768093026 +0000 UTC m=+1.325743543 container remove 515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:25 np0005549633 systemd[1]: libpod-conmon-515ed2e700d8d9b5ba5e8bdf98a05cfa3bcb9b4473c0dd2c2be5c17354acd0cc.scope: Deactivated successfully.
Dec  7 14:51:25 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  7 14:51:25 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  7 14:51:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 10 completed events
Dec  7 14:51:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:51:26 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  7 14:51:26 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  7 14:51:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1470392527' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  7 14:51:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Dec  7 14:51:26 np0005549633 pedantic_brahmagupta[86357]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  7 14:51:26 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Dec  7 14:51:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:26 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:26 np0005549633 systemd[1]: libpod-4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c.scope: Deactivated successfully.
Dec  7 14:51:26 np0005549633 podman[86318]: 2025-12-07 19:51:26.997820507 +0000 UTC m=+2.970338048 container died 4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c (image=quay.io/ceph/ceph:v19, name=pedantic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:27 np0005549633 systemd[1]: var-lib-containers-storage-overlay-d47c9584bcb9877ddac6b76687c3b97551eb665401c81e4b62418cbd549bd2c5-merged.mount: Deactivated successfully.
Dec  7 14:51:27 np0005549633 podman[86318]: 2025-12-07 19:51:27.039483911 +0000 UTC m=+3.012001482 container remove 4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c (image=quay.io/ceph/ceph:v19, name=pedantic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:27 np0005549633 systemd[1]: libpod-conmon-4dbb91119478e73eed61c64488a1d9eab398cc341dffed38392c17f3a4d4ef2c.scope: Deactivated successfully.
Dec  7 14:51:27 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:27 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:27 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  7 14:51:27 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  7 14:51:28 np0005549633 python3[86588]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:51:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:51:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:28 np0005549633 python3[86659]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765137087.866525-37259-229655244058190/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:51:28 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  7 14:51:28 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1470392527' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:29 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:29 np0005549633 python3[86761]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:51:29 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v103: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:29 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  7 14:51:29 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  7 14:51:29 np0005549633 python3[86836]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765137089.0652664-37273-265390893041053/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=86f16f2cec580508e67372b24feed215102afeae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:51:30 np0005549633 python3[86886]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:30 np0005549633 ceph-mon[74384]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 14:51:30 np0005549633 ceph-mon[74384]: Cluster is now healthy
Dec  7 14:51:30 np0005549633 podman[86887]: 2025-12-07 19:51:30.431580767 +0000 UTC m=+0.064666909 container create aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c (image=quay.io/ceph/ceph:v19, name=jovial_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:30 np0005549633 systemd[1]: Started libpod-conmon-aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c.scope.
Dec  7 14:51:30 np0005549633 podman[86887]: 2025-12-07 19:51:30.410042615 +0000 UTC m=+0.043128827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:30 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:30 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51081a11529fe7c695417e8c607441c4e96d5305e7a5cbac8bf75fa59705071/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:30 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51081a11529fe7c695417e8c607441c4e96d5305e7a5cbac8bf75fa59705071/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:30 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c51081a11529fe7c695417e8c607441c4e96d5305e7a5cbac8bf75fa59705071/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:30 np0005549633 podman[86887]: 2025-12-07 19:51:30.548231921 +0000 UTC m=+0.181318093 container init aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c (image=quay.io/ceph/ceph:v19, name=jovial_ardinghelli, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  7 14:51:30 np0005549633 podman[86887]: 2025-12-07 19:51:30.561647058 +0000 UTC m=+0.194733220 container start aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c (image=quay.io/ceph/ceph:v19, name=jovial_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:30 np0005549633 podman[86887]: 2025-12-07 19:51:30.566262757 +0000 UTC m=+0.199348949 container attach aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c (image=quay.io/ceph/ceph:v19, name=jovial_ardinghelli, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 14:51:30 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  7 14:51:30 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  7 14:51:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  7 14:51:30 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/711116259' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 14:51:30 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/711116259' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 14:51:30 np0005549633 jovial_ardinghelli[86902]: 
Dec  7 14:51:30 np0005549633 jovial_ardinghelli[86902]: [global]
Dec  7 14:51:30 np0005549633 jovial_ardinghelli[86902]: #011fsid = a8ac706f-8288-541e-8e56-e1124d9b483d
Dec  7 14:51:30 np0005549633 jovial_ardinghelli[86902]: #011mon_host = 192.168.122.100
Dec  7 14:51:30 np0005549633 systemd[1]: libpod-aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c.scope: Deactivated successfully.
Dec  7 14:51:30 np0005549633 podman[86887]: 2025-12-07 19:51:30.977292098 +0000 UTC m=+0.610378230 container died aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c (image=quay.io/ceph/ceph:v19, name=jovial_ardinghelli, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:31 np0005549633 systemd[1]: var-lib-containers-storage-overlay-c51081a11529fe7c695417e8c607441c4e96d5305e7a5cbac8bf75fa59705071-merged.mount: Deactivated successfully.
Dec  7 14:51:31 np0005549633 podman[86887]: 2025-12-07 19:51:31.011146235 +0000 UTC m=+0.644232367 container remove aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c (image=quay.io/ceph/ceph:v19, name=jovial_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:31 np0005549633 systemd[1]: libpod-conmon-aa26a5fcffe255721e42c9d3d45dff2528f124e1c827f97a2f8bc56529d3ba2c.scope: Deactivated successfully.
Dec  7 14:51:31 np0005549633 python3[86963]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:31 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/711116259' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  7 14:51:31 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/711116259' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  7 14:51:31 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  7 14:51:31 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  7 14:51:31 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:51:31 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:51:31 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec  7 14:51:31 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec  7 14:51:31 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v104: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:31 np0005549633 podman[86964]: 2025-12-07 19:51:31.53608838 +0000 UTC m=+0.102699644 container create d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18 (image=quay.io/ceph/ceph:v19, name=determined_nash, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:31 np0005549633 systemd[1]: Started libpod-conmon-d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18.scope.
Dec  7 14:51:31 np0005549633 podman[86964]: 2025-12-07 19:51:31.507602699 +0000 UTC m=+0.074214003 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:31 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:31 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3eb6cba3333c79fb70c6b26115da2b27a9181b1e0670294cab8032a0ba5d40/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:31 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3eb6cba3333c79fb70c6b26115da2b27a9181b1e0670294cab8032a0ba5d40/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:31 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be3eb6cba3333c79fb70c6b26115da2b27a9181b1e0670294cab8032a0ba5d40/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:31 np0005549633 podman[86964]: 2025-12-07 19:51:31.655325649 +0000 UTC m=+0.221936933 container init d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18 (image=quay.io/ceph/ceph:v19, name=determined_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  7 14:51:31 np0005549633 podman[86964]: 2025-12-07 19:51:31.666036429 +0000 UTC m=+0.232647653 container start d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18 (image=quay.io/ceph/ceph:v19, name=determined_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  7 14:51:31 np0005549633 podman[86964]: 2025-12-07 19:51:31.669824541 +0000 UTC m=+0.236435855 container attach d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18 (image=quay.io/ceph/ceph:v19, name=determined_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 14:51:31 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  7 14:51:31 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  7 14:51:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec  7 14:51:32 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/261965668' entity='client.admin' 
Dec  7 14:51:32 np0005549633 determined_nash[86979]: set ssl_option
Dec  7 14:51:32 np0005549633 systemd[1]: libpod-d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18.scope: Deactivated successfully.
Dec  7 14:51:32 np0005549633 podman[86964]: 2025-12-07 19:51:32.211633138 +0000 UTC m=+0.778244372 container died d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18 (image=quay.io/ceph/ceph:v19, name=determined_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:32 np0005549633 systemd[1]: var-lib-containers-storage-overlay-be3eb6cba3333c79fb70c6b26115da2b27a9181b1e0670294cab8032a0ba5d40-merged.mount: Deactivated successfully.
Dec  7 14:51:32 np0005549633 podman[86964]: 2025-12-07 19:51:32.253720792 +0000 UTC m=+0.820332016 container remove d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18 (image=quay.io/ceph/ceph:v19, name=determined_nash, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:32 np0005549633 systemd[1]: libpod-conmon-d60eeb04bb3fbb79ebf963ac819b24670f7ec55cb2dfddb1cb5686716f14ce18.scope: Deactivated successfully.
Dec  7 14:51:32 np0005549633 python3[87042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:32 np0005549633 podman[87043]: 2025-12-07 19:51:32.809726244 +0000 UTC m=+0.057585587 container create 1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:32 np0005549633 systemd[1]: Started libpod-conmon-1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470.scope.
Dec  7 14:51:32 np0005549633 podman[87043]: 2025-12-07 19:51:32.784174526 +0000 UTC m=+0.032033939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:32 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Dec  7 14:51:32 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:32 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Dec  7 14:51:32 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ff7b759cc625b12527fd9a3144bc9c65405052ce8f0829b27a07b8c24d5ed2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:32 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ff7b759cc625b12527fd9a3144bc9c65405052ce8f0829b27a07b8c24d5ed2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:32 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8ff7b759cc625b12527fd9a3144bc9c65405052ce8f0829b27a07b8c24d5ed2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:32 np0005549633 podman[87043]: 2025-12-07 19:51:32.904392105 +0000 UTC m=+0.152251448 container init 1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:32 np0005549633 podman[87043]: 2025-12-07 19:51:32.915807631 +0000 UTC m=+0.163666954 container start 1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:32 np0005549633 podman[87043]: 2025-12-07 19:51:32.919640082 +0000 UTC m=+0.167499455 container attach 1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Dec  7 14:51:32 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  7 14:51:32 np0005549633 ceph-mon[74384]: Deploying daemon osd.2 on compute-2
Dec  7 14:51:32 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/261965668' entity='client.admin' 
Dec  7 14:51:33 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:33 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14271 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:51:33 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 14:51:33 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 14:51:33 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 14:51:33 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:33 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec  7 14:51:33 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec  7 14:51:33 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 14:51:33 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:33 np0005549633 intelligent_jackson[87058]: Scheduled rgw.rgw update...
Dec  7 14:51:33 np0005549633 intelligent_jackson[87058]: Scheduled ingress.rgw.default update...
Dec  7 14:51:33 np0005549633 systemd[1]: libpod-1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470.scope: Deactivated successfully.
Dec  7 14:51:33 np0005549633 podman[87043]: 2025-12-07 19:51:33.736493243 +0000 UTC m=+0.984352606 container died 1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:33 np0005549633 systemd[1]: var-lib-containers-storage-overlay-c8ff7b759cc625b12527fd9a3144bc9c65405052ce8f0829b27a07b8c24d5ed2-merged.mount: Deactivated successfully.
Dec  7 14:51:33 np0005549633 podman[87043]: 2025-12-07 19:51:33.782931569 +0000 UTC m=+1.030790912 container remove 1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470 (image=quay.io/ceph/ceph:v19, name=intelligent_jackson, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:33 np0005549633 systemd[1]: libpod-conmon-1dd8319060963f93ffd3517fa67abd464fe03fbfeeb96aa46082611dcfc98470.scope: Deactivated successfully.
Dec  7 14:51:33 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  7 14:51:33 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  7 14:51:34 np0005549633 ceph-mon[74384]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 14:51:34 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:34 np0005549633 ceph-mon[74384]: Saving service ingress.rgw.default spec with placement count:2
Dec  7 14:51:34 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:34 np0005549633 python3[87170]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:51:34 np0005549633 python3[87241]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765137093.9852319-37292-154141655078706/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:51:34 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  7 14:51:34 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  7 14:51:35 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v106: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:35 np0005549633 python3[87291]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:35 np0005549633 podman[87292]: 2025-12-07 19:51:35.609430937 +0000 UTC m=+0.063369461 container create 0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57 (image=quay.io/ceph/ceph:v19, name=mystifying_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 14:51:35 np0005549633 systemd[1]: Started libpod-conmon-0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57.scope.
Dec  7 14:51:35 np0005549633 podman[87292]: 2025-12-07 19:51:35.577433561 +0000 UTC m=+0.031372125 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:35 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7daac46b3d8ec626aabbba4309f335b072c16e1567c45a929f96f2d32b533f5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7daac46b3d8ec626aabbba4309f335b072c16e1567c45a929f96f2d32b533f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:35 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7daac46b3d8ec626aabbba4309f335b072c16e1567c45a929f96f2d32b533f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:35 np0005549633 podman[87292]: 2025-12-07 19:51:35.711324264 +0000 UTC m=+0.165262828 container init 0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57 (image=quay.io/ceph/ceph:v19, name=mystifying_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:51:35 np0005549633 podman[87292]: 2025-12-07 19:51:35.720905169 +0000 UTC m=+0.174843703 container start 0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57 (image=quay.io/ceph/ceph:v19, name=mystifying_colden, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:35 np0005549633 podman[87292]: 2025-12-07 19:51:35.72513153 +0000 UTC m=+0.179070064 container attach 0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57 (image=quay.io/ceph/ceph:v19, name=mystifying_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 14:51:35 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  7 14:51:35 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14277 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec  7 14:51:36 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  7 14:51:36 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:36 np0005549633 mystifying_colden[87308]: Scheduled node-exporter update...
Dec  7 14:51:36 np0005549633 mystifying_colden[87308]: Scheduled grafana update...
Dec  7 14:51:36 np0005549633 mystifying_colden[87308]: Scheduled prometheus update...
Dec  7 14:51:36 np0005549633 mystifying_colden[87308]: Scheduled alertmanager update...
Dec  7 14:51:36 np0005549633 systemd[1]: libpod-0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57.scope: Deactivated successfully.
Dec  7 14:51:36 np0005549633 podman[87292]: 2025-12-07 19:51:36.330665625 +0000 UTC m=+0.784604189 container died 0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57 (image=quay.io/ceph/ceph:v19, name=mystifying_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 14:51:36 np0005549633 systemd[1]: var-lib-containers-storage-overlay-e7daac46b3d8ec626aabbba4309f335b072c16e1567c45a929f96f2d32b533f5-merged.mount: Deactivated successfully.
Dec  7 14:51:36 np0005549633 podman[87292]: 2025-12-07 19:51:36.378629605 +0000 UTC m=+0.832568089 container remove 0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57 (image=quay.io/ceph/ceph:v19, name=mystifying_colden, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 14:51:36 np0005549633 systemd[1]: libpod-conmon-0df1326308e1b2f324c1ea38c58e15ee925eee196fe52cb7bc0503eb7f2e7a57.scope: Deactivated successfully.
Dec  7 14:51:36 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.8 deep-scrub starts
Dec  7 14:51:36 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.8 deep-scrub ok
Dec  7 14:51:37 np0005549633 python3[87369]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: Saving service node-exporter spec with placement *
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: Saving service grafana spec with placement compute-0;count:1
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: Saving service prometheus spec with placement compute-0;count:1
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: Saving service alertmanager spec with placement compute-0;count:1
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.090968572 +0000 UTC m=+0.053397558 container create a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38 (image=quay.io/ceph/ceph:v19, name=cranky_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:37 np0005549633 systemd[1]: Started libpod-conmon-a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38.scope.
Dec  7 14:51:37 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.064355181 +0000 UTC m=+0.026784157 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119ee90234993befc056fa8c4ff28f1d98104fe63207151d95648c3f772568a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119ee90234993befc056fa8c4ff28f1d98104fe63207151d95648c3f772568a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:37 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/119ee90234993befc056fa8c4ff28f1d98104fe63207151d95648c3f772568a3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.189988777 +0000 UTC m=+0.152417753 container init a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38 (image=quay.io/ceph/ceph:v19, name=cranky_franklin, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.200958082 +0000 UTC m=+0.163387038 container start a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38 (image=quay.io/ceph/ceph:v19, name=cranky_franklin, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.205080311 +0000 UTC m=+0.167509287 container attach a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38 (image=quay.io/ceph/ceph:v19, name=cranky_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:37 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v107: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/808491534' entity='client.admin' 
Dec  7 14:51:37 np0005549633 systemd[1]: libpod-a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38.scope: Deactivated successfully.
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.641249272 +0000 UTC m=+0.603678218 container died a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38 (image=quay.io/ceph/ceph:v19, name=cranky_franklin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:37 np0005549633 systemd[1]: var-lib-containers-storage-overlay-119ee90234993befc056fa8c4ff28f1d98104fe63207151d95648c3f772568a3-merged.mount: Deactivated successfully.
Dec  7 14:51:37 np0005549633 podman[87370]: 2025-12-07 19:51:37.681786801 +0000 UTC m=+0.644215757 container remove a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38 (image=quay.io/ceph/ceph:v19, name=cranky_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 14:51:37 np0005549633 systemd[1]: libpod-conmon-a17c8c302dc649d5af3a34b6c19e03a9a760b05467eae51cf9a7bfff30726b38.scope: Deactivated successfully.
Dec  7 14:51:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:37 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec  7 14:51:37 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec  7 14:51:38 np0005549633 python3[87448]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.156992349 +0000 UTC m=+0.068459040 container create 270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609 (image=quay.io/ceph/ceph:v19, name=magical_euler, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 14:51:38 np0005549633 systemd[1]: Started libpod-conmon-270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609.scope.
Dec  7 14:51:38 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.13137733 +0000 UTC m=+0.042844041 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:38 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f51031d5eefe5e45fe41e25096f813a06bdc538ca53fd69cd56710ff5d27e09/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:38 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f51031d5eefe5e45fe41e25096f813a06bdc538ca53fd69cd56710ff5d27e09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:38 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f51031d5eefe5e45fe41e25096f813a06bdc538ca53fd69cd56710ff5d27e09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.477849985 +0000 UTC m=+0.389316736 container init 270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609 (image=quay.io/ceph/ceph:v19, name=magical_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.489731141 +0000 UTC m=+0.401197802 container start 270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609 (image=quay.io/ceph/ceph:v19, name=magical_euler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.494232127 +0000 UTC m=+0.405698868 container attach 270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609 (image=quay.io/ceph/ceph:v19, name=magical_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:38 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/808491534' entity='client.admin' 
Dec  7 14:51:38 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  7 14:51:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec  7 14:51:38 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  7 14:51:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3284658908' entity='client.admin' 
Dec  7 14:51:38 np0005549633 systemd[1]: libpod-270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609.scope: Deactivated successfully.
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.915857725 +0000 UTC m=+0.827324466 container died 270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609 (image=quay.io/ceph/ceph:v19, name=magical_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 14:51:38 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4f51031d5eefe5e45fe41e25096f813a06bdc538ca53fd69cd56710ff5d27e09-merged.mount: Deactivated successfully.
Dec  7 14:51:38 np0005549633 podman[87449]: 2025-12-07 19:51:38.965758696 +0000 UTC m=+0.877225357 container remove 270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609 (image=quay.io/ceph/ceph:v19, name=magical_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:38 np0005549633 systemd[1]: libpod-conmon-270fb466e7c3c0cecb8e952930b592831370134fcf401a2939df280b36632609.scope: Deactivated successfully.
Dec  7 14:51:39 np0005549633 python3[87524]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:39 np0005549633 podman[87525]: 2025-12-07 19:51:39.487087064 +0000 UTC m=+0.072500797 container create 1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf (image=quay.io/ceph/ceph:v19, name=crazy_shtern, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:39 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v108: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:39 np0005549633 systemd[1]: Started libpod-conmon-1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf.scope.
Dec  7 14:51:39 np0005549633 podman[87525]: 2025-12-07 19:51:39.455747131 +0000 UTC m=+0.041160874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:39 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251742baaaef4979c91e0e1f18eddbbdcb737a7306bb51923080c9c29c7df35f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251742baaaef4979c91e0e1f18eddbbdcb737a7306bb51923080c9c29c7df35f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/251742baaaef4979c91e0e1f18eddbbdcb737a7306bb51923080c9c29c7df35f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:39 np0005549633 podman[87525]: 2025-12-07 19:51:39.578149228 +0000 UTC m=+0.163563011 container init 1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf (image=quay.io/ceph/ceph:v19, name=crazy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:39 np0005549633 podman[87525]: 2025-12-07 19:51:39.590388621 +0000 UTC m=+0.175802354 container start 1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf (image=quay.io/ceph/ceph:v19, name=crazy_shtern, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:39 np0005549633 podman[87525]: 2025-12-07 19:51:39.594376456 +0000 UTC m=+0.179790249 container attach 1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf (image=quay.io/ceph/ceph:v19, name=crazy_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3284658908' entity='client.admin' 
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:39 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  7 14:51:39 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  7 14:51:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec  7 14:51:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3046413619' entity='client.admin' 
Dec  7 14:51:40 np0005549633 systemd[1]: libpod-1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf.scope: Deactivated successfully.
Dec  7 14:51:40 np0005549633 podman[87525]: 2025-12-07 19:51:40.028098284 +0000 UTC m=+0.613512007 container died 1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf (image=quay.io/ceph/ceph:v19, name=crazy_shtern, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  7 14:51:40 np0005549633 systemd[1]: var-lib-containers-storage-overlay-251742baaaef4979c91e0e1f18eddbbdcb737a7306bb51923080c9c29c7df35f-merged.mount: Deactivated successfully.
Dec  7 14:51:40 np0005549633 podman[87525]: 2025-12-07 19:51:40.076765539 +0000 UTC m=+0.662179232 container remove 1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf (image=quay.io/ceph/ceph:v19, name=crazy_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  7 14:51:40 np0005549633 systemd[1]: libpod-conmon-1b7635521b51cf0780f236433f70e9c012295b57549b80f8628bb530922977cf.scope: Deactivated successfully.
Dec  7 14:51:40 np0005549633 python3[87602]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:40 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec  7 14:51:40 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec  7 14:51:41 np0005549633 python3[87639]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.dyzcyj/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:41 np0005549633 podman[87640]: 2025-12-07 19:51:41.401399516 +0000 UTC m=+0.076509233 container create 50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8 (image=quay.io/ceph/ceph:v19, name=nervous_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  7 14:51:41 np0005549633 systemd[1]: Started libpod-conmon-50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8.scope.
Dec  7 14:51:41 np0005549633 podman[87640]: 2025-12-07 19:51:41.370051964 +0000 UTC m=+0.045161741 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:41 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v109: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:41 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c52736e26ad7c4c7405726c2da4886d8b62a5cf43cd706341f4fa42fbd1692/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c52736e26ad7c4c7405726c2da4886d8b62a5cf43cd706341f4fa42fbd1692/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c52736e26ad7c4c7405726c2da4886d8b62a5cf43cd706341f4fa42fbd1692/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:41 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3046413619' entity='client.admin' 
Dec  7 14:51:41 np0005549633 podman[87640]: 2025-12-07 19:51:41.543332742 +0000 UTC m=+0.218442489 container init 50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8 (image=quay.io/ceph/ceph:v19, name=nervous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:41 np0005549633 podman[87640]: 2025-12-07 19:51:41.556509575 +0000 UTC m=+0.231619302 container start 50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8 (image=quay.io/ceph/ceph:v19, name=nervous_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  7 14:51:41 np0005549633 podman[87640]: 2025-12-07 19:51:41.560815867 +0000 UTC m=+0.235925624 container attach 50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8 (image=quay.io/ceph/ceph:v19, name=nervous_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  7 14:51:41 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  7 14:51:41 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  7 14:51:41 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.dyzcyj/server_addr}] v 0)
Dec  7 14:51:41 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3969554193' entity='client.admin' 
Dec  7 14:51:41 np0005549633 systemd[1]: libpod-50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8.scope: Deactivated successfully.
Dec  7 14:51:41 np0005549633 podman[87640]: 2025-12-07 19:51:41.997726104 +0000 UTC m=+0.672835851 container died 50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8 (image=quay.io/ceph/ceph:v19, name=nervous_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:42 np0005549633 systemd[1]: var-lib-containers-storage-overlay-83c52736e26ad7c4c7405726c2da4886d8b62a5cf43cd706341f4fa42fbd1692-merged.mount: Deactivated successfully.
Dec  7 14:51:42 np0005549633 podman[87640]: 2025-12-07 19:51:42.058257433 +0000 UTC m=+0.733367140 container remove 50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8 (image=quay.io/ceph/ceph:v19, name=nervous_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:42 np0005549633 systemd[1]: libpod-conmon-50b33951bbb2a0656a20ad330c0880c22f46159f5767a2ac3c20294e7660dab8.scope: Deactivated successfully.
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3969554193' entity='client.admin' 
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:42 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec  7 14:51:42 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:42 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  7 14:51:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e34 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec  7 14:51:43 np0005549633 python3[87718]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.cgejnh/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.14152066 +0000 UTC m=+0.076121284 container create 5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184 (image=quay.io/ceph/ceph:v19, name=nifty_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:43 np0005549633 systemd[1]: Started libpod-conmon-5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184.scope.
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.109096055 +0000 UTC m=+0.043696719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:43 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99adb2e220b23c10c008ee025cbff4d5884ad5a04873ba5d22a6cdb1f493394b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99adb2e220b23c10c008ee025cbff4d5884ad5a04873ba5d22a6cdb1f493394b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99adb2e220b23c10c008ee025cbff4d5884ad5a04873ba5d22a6cdb1f493394b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.257063139 +0000 UTC m=+0.191663753 container init 5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184 (image=quay.io/ceph/ceph:v19, name=nifty_goldwasser, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.26497437 +0000 UTC m=+0.199574984 container start 5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184 (image=quay.io/ceph/ceph:v19, name=nifty_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.269698721 +0000 UTC m=+0.204299345 container attach 5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184 (image=quay.io/ceph/ceph:v19, name=nifty_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 14:51:43 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v111: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='osd.2 [v2:192.168.122.102:6800/2109095882,v1:192.168.122.102:6801/2109095882]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='osd.2 [v2:192.168.122.102:6800/2109095882,v1:192.168.122.102:6801/2109095882]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.cgejnh/server_addr}] v 0)
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3174458199' entity='client.admin' 
Dec  7 14:51:43 np0005549633 systemd[1]: libpod-5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184.scope: Deactivated successfully.
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.762077148 +0000 UTC m=+0.696677762 container died 5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184 (image=quay.io/ceph/ceph:v19, name=nifty_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 14:51:43 np0005549633 systemd[1]: var-lib-containers-storage-overlay-99adb2e220b23c10c008ee025cbff4d5884ad5a04873ba5d22a6cdb1f493394b-merged.mount: Deactivated successfully.
Dec  7 14:51:43 np0005549633 podman[87744]: 2025-12-07 19:51:43.812219524 +0000 UTC m=+0.746820138 container remove 5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184 (image=quay.io/ceph/ceph:v19, name=nifty_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:51:43 np0005549633 systemd[1]: libpod-conmon-5c2ef3db26094aafd6c50c013aa26d9c5cbe20256a451aa20fd95d8013d7f184.scope: Deactivated successfully.
Dec  7 14:51:43 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  7 14:51:43 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Dec  7 14:51:43 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:44 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:44 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:44 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378420830s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.206565857s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.372322083s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.200508118s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378420830s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.206565857s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.372322083s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.200508118s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.385823250s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.214233398s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378081322s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.206573486s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.385823250s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.214233398s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378081322s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.206573486s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378247261s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.206764221s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378247261s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.206764221s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.385087967s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.213851929s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377920151s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.206710815s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.385087967s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.213851929s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378213882s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.207115173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.384901047s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.213806152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.384901047s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.213806152s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377789497s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.206779480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.378213882s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.207115173s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377920151s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.206710815s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377789497s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.206779480s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377912521s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.207084656s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377912521s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.207084656s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.383741379s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.213294983s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.383741379s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.213294983s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377656937s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.207283020s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.383509636s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.213172913s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377609253s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.207260132s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377656937s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.207283020s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.383509636s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.213172913s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377609253s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.207260132s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.383965492s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.213813782s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.383965492s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.213813782s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.317646027s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 97.147544861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377437592s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 97.207344055s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.317646027s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.147544861s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=9.377437592s) [] r=-1 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 97.207344055s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3174458199' entity='client.admin' 
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:51:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:44 np0005549633 python3[87901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.orbdku/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  7 14:51:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  7 14:51:45 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:45 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.063533678 +0000 UTC m=+0.092895465 container create 31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0 (image=quay.io/ceph/ceph:v19, name=xenodochial_williamson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  7 14:51:45 np0005549633 systemd[1]: Started libpod-conmon-31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0.scope.
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.03333769 +0000 UTC m=+0.062699557 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:45 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:45 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbe69709095738beeaa747c788456d0de9564cc8be4d0e6b7f272cd98cd0fb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:45 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbe69709095738beeaa747c788456d0de9564cc8be4d0e6b7f272cd98cd0fb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:45 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dbe69709095738beeaa747c788456d0de9564cc8be4d0e6b7f272cd98cd0fb7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.181870467 +0000 UTC m=+0.211232304 container init 31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0 (image=quay.io/ceph/ceph:v19, name=xenodochial_williamson, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.194386996 +0000 UTC m=+0.223748783 container start 31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0 (image=quay.io/ceph/ceph:v19, name=xenodochial_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.197899372 +0000 UTC m=+0.227261189 container attach 31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0 (image=quay.io/ceph/ceph:v19, name=xenodochial_williamson, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:51:45 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v113: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.orbdku/server_addr}] v 0)
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1712120553' entity='client.admin' 
Dec  7 14:51:45 np0005549633 systemd[1]: libpod-31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0.scope: Deactivated successfully.
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.672972357 +0000 UTC m=+0.702334144 container died 31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0 (image=quay.io/ceph/ceph:v19, name=xenodochial_williamson, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:45 np0005549633 systemd[1]: var-lib-containers-storage-overlay-7dbe69709095738beeaa747c788456d0de9564cc8be4d0e6b7f272cd98cd0fb7-merged.mount: Deactivated successfully.
Dec  7 14:51:45 np0005549633 podman[87902]: 2025-12-07 19:51:45.719834602 +0000 UTC m=+0.749196389 container remove 31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0 (image=quay.io/ceph/ceph:v19, name=xenodochial_williamson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:51:45 np0005549633 systemd[1]: libpod-conmon-31326a44c870fbcc9d64dffede74e11553b259f921151579ce79f5c01768c2d0.scope: Deactivated successfully.
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:51:45 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1712120553' entity='client.admin' 
Dec  7 14:51:45 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Dec  7 14:51:45 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Dec  7 14:51:46 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:46 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:46 np0005549633 python3[87980]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:46 np0005549633 podman[87981]: 2025-12-07 19:51:46.186846315 +0000 UTC m=+0.066790235 container create 4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f (image=quay.io/ceph/ceph:v19, name=blissful_sutherland, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:46 np0005549633 systemd[1]: Started libpod-conmon-4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f.scope.
Dec  7 14:51:46 np0005549633 podman[87981]: 2025-12-07 19:51:46.164162308 +0000 UTC m=+0.044106278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:46 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c633e329fb5fb4cd88000601e5e9862aac6aaf73779744c404c23e12f5393a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c633e329fb5fb4cd88000601e5e9862aac6aaf73779744c404c23e12f5393a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c633e329fb5fb4cd88000601e5e9862aac6aaf73779744c404c23e12f5393a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:46 np0005549633 podman[87981]: 2025-12-07 19:51:46.282395785 +0000 UTC m=+0.162339755 container init 4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f (image=quay.io/ceph/ceph:v19, name=blissful_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:51:46 np0005549633 podman[87981]: 2025-12-07 19:51:46.292314178 +0000 UTC m=+0.172258098 container start 4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f (image=quay.io/ceph/ceph:v19, name=blissful_sutherland, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 14:51:46 np0005549633 podman[87981]: 2025-12-07 19:51:46.296257773 +0000 UTC m=+0.176201693 container attach 4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f (image=quay.io/ceph/ceph:v19, name=blissful_sutherland, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  7 14:51:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  7 14:51:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1129854125' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 14:51:46 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  7 14:51:46 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  7 14:51:47 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:47 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:47 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v114: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1129854125' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 14:51:47 np0005549633 blissful_sutherland[87997]: module 'dashboard' is already disabled
Dec  7 14:51:47 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.dyzcyj(active, since 2m), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:51:47 np0005549633 systemd[1]: libpod-4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f.scope: Deactivated successfully.
Dec  7 14:51:47 np0005549633 podman[87981]: 2025-12-07 19:51:47.9348951 +0000 UTC m=+1.814839050 container died 4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f (image=quay.io/ceph/ceph:v19, name=blissful_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:51:47 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4c633e329fb5fb4cd88000601e5e9862aac6aaf73779744c404c23e12f5393a2-merged.mount: Deactivated successfully.
Dec  7 14:51:47 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  7 14:51:47 np0005549633 podman[87981]: 2025-12-07 19:51:47.988651977 +0000 UTC m=+1.868595907 container remove 4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f (image=quay.io/ceph/ceph:v19, name=blissful_sutherland, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 14:51:47 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  7 14:51:48 np0005549633 systemd[1]: libpod-conmon-4c47e13d89f88ddb63fe189eeb16a66b950bca181078997e669d0a047fb36d6f.scope: Deactivated successfully.
Dec  7 14:51:48 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:48 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:48 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:48 np0005549633 python3[88059]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:48 np0005549633 podman[88060]: 2025-12-07 19:51:48.528603766 +0000 UTC m=+0.070216307 container create 42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa (image=quay.io/ceph/ceph:v19, name=pedantic_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:51:48 np0005549633 systemd[1]: Started libpod-conmon-42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa.scope.
Dec  7 14:51:48 np0005549633 podman[88060]: 2025-12-07 19:51:48.505734985 +0000 UTC m=+0.047347626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:48 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b74781e9e8c95a23d14f15ad40da548140f991a38fc66ec1b78e44f1ef88a39/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b74781e9e8c95a23d14f15ad40da548140f991a38fc66ec1b78e44f1ef88a39/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b74781e9e8c95a23d14f15ad40da548140f991a38fc66ec1b78e44f1ef88a39/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:48 np0005549633 podman[88060]: 2025-12-07 19:51:48.642790148 +0000 UTC m=+0.184402709 container init 42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa (image=quay.io/ceph/ceph:v19, name=pedantic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  7 14:51:48 np0005549633 podman[88060]: 2025-12-07 19:51:48.652533848 +0000 UTC m=+0.194146429 container start 42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa (image=quay.io/ceph/ceph:v19, name=pedantic_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:51:48 np0005549633 podman[88060]: 2025-12-07 19:51:48.658913489 +0000 UTC m=+0.200526130 container attach 42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa (image=quay.io/ceph/ceph:v19, name=pedantic_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  7 14:51:49 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:49 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:49 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  7 14:51:49 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  7 14:51:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  7 14:51:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1381267678' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 14:51:49 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v115: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:50 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:50 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:50 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec  7 14:51:50 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec  7 14:51:50 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1129854125' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 14:51:51 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:51 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:51 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec  7 14:51:51 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec  7 14:51:51 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v116: 100 pgs: 100 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec  7 14:51:52 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:52 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:52 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:52 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec  7 14:51:52 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec  7 14:51:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2109095882; not ready for session (expect reconnect)
Dec  7 14:51:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:53 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/90497734' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:53 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec  7 14:51:53 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec  7 14:51:53 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1381267678' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  1: '-n'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  2: 'mgr.compute-0.dyzcyj'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  3: '-f'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  4: '--setuser'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  5: 'ceph'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  6: '--setgroup'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  7: 'ceph'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr respawn  exe_path /proc/self/exe
Dec  7 14:51:53 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.dyzcyj(active, since 2m), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:51:53 np0005549633 systemd[1]: libpod-42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 podman[88060]: 2025-12-07 19:51:53.342679536 +0000 UTC m=+4.884292117 container died 42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa (image=quay.io/ceph/ceph:v19, name=pedantic_hamilton, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:51:53 np0005549633 systemd[1]: var-lib-containers-storage-overlay-6b74781e9e8c95a23d14f15ad40da548140f991a38fc66ec1b78e44f1ef88a39-merged.mount: Deactivated successfully.
Dec  7 14:51:53 np0005549633 podman[88060]: 2025-12-07 19:51:53.397233794 +0000 UTC m=+4.938846365 container remove 42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa (image=quay.io/ceph/ceph:v19, name=pedantic_hamilton, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:53 np0005549633 systemd[1]: libpod-conmon-42dc0580374c37990e30694660ad74367dd1022428c3a442353ff66b27f98eaa.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-27.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-30.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-32.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-21.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-26.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-31.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-23.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 27 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd[1]: session-28.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-29.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 21 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 31 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 30 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd[1]: session-33.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd[1]: session-33.scope: Consumed 28.894s CPU time.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 28 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 29 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setuser ceph since I am not root
Dec  7 14:51:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setgroup ceph since I am not root
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 23 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 26 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 32 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 33 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 27.
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: pidfile_write: ignore empty --pid-file
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 30.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 32.
Dec  7 14:51:53 np0005549633 systemd[1]: session-25.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 25 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 21.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 26.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 31.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 23.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 28.
Dec  7 14:51:53 np0005549633 systemd[1]: session-24.scope: Deactivated successfully.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 29.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Session 24 logged out. Waiting for processes to exit.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 33.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 25.
Dec  7 14:51:53 np0005549633 systemd-logind[797]: Removed session 24.
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'alerts'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:51:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:53.608+0000 7ff442d6c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'balancer'
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:51:53 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'cephadm'
Dec  7 14:51:53 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:53.689+0000 7ff442d6c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:51:53 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1129854125' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 14:51:53 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1381267678' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 14:51:53 np0005549633 python3[88161]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:51:54 np0005549633 podman[88162]: 2025-12-07 19:51:54.041090696 +0000 UTC m=+0.075290293 container create 6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760 (image=quay.io/ceph/ceph:v19, name=agitated_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 14:51:54 np0005549633 systemd[1]: Started libpod-conmon-6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760.scope.
Dec  7 14:51:54 np0005549633 podman[88162]: 2025-12-07 19:51:54.018329537 +0000 UTC m=+0.052529214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:51:54 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:51:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd9962c0819cc66b3e9c2cc832a592d4d3aec94b32c72fee4e8228acebbb858/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd9962c0819cc66b3e9c2cc832a592d4d3aec94b32c72fee4e8228acebbb858/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dd9962c0819cc66b3e9c2cc832a592d4d3aec94b32c72fee4e8228acebbb858/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:51:54 np0005549633 podman[88162]: 2025-12-07 19:51:54.137343727 +0000 UTC m=+0.171543374 container init 6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760 (image=quay.io/ceph/ceph:v19, name=agitated_taussig, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:51:54 np0005549633 podman[88162]: 2025-12-07 19:51:54.146911422 +0000 UTC m=+0.181111019 container start 6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760 (image=quay.io/ceph/ceph:v19, name=agitated_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 14:51:54 np0005549633 podman[88162]: 2025-12-07 19:51:54.153525009 +0000 UTC m=+0.187724646 container attach 6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760 (image=quay.io/ceph/ceph:v19, name=agitated_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:51:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'crash'
Dec  7 14:51:54 np0005549633 ceph-mgr[74680]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:51:54 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'dashboard'
Dec  7 14:51:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:54.514+0000 7ff442d6c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'devicehealth'
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:55.103+0000 7ff442d6c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  from numpy import show_config as show_numpy_config
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:55.262+0000 7ff442d6c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'influx'
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'insights'
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:55.327+0000 7ff442d6c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'iostat'
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:55.453+0000 7ff442d6c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'k8sevents'
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'localpool'
Dec  7 14:51:55 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mirroring'
Dec  7 14:51:56 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/1381267678' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'nfs'
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'orchestrator'
Dec  7 14:51:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:56.507+0000 7ff442d6c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:56.728+0000 7ff442d6c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:56.814+0000 7ff442d6c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_support'
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:56.889+0000 7ff442d6c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:51:56 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'progress'
Dec  7 14:51:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:56.973+0000 7ff442d6c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:57.047+0000 7ff442d6c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'prometheus'
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:57.420+0000 7ff442d6c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rbd_support'
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'restful'
Dec  7 14:51:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:57.538+0000 7ff442d6c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:51:57 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rgw'
Dec  7 14:51:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:58.013+0000 7ff442d6c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rook'
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:58.621+0000 7ff442d6c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'selftest'
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:58.698+0000 7ff442d6c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'snap_schedule'
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'stats'
Dec  7 14:51:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:58.783+0000 7ff442d6c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'status'
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:58.938+0000 7ff442d6c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:51:58 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telegraf'
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:59.009+0000 7ff442d6c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telemetry'
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:59.176+0000 7ff442d6c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:59.415+0000 7ff442d6c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'volumes'
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh restarted
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh started
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.dyzcyj(active, since 3m), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:59.722+0000 7ff442d6c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'zabbix'
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:51:59.809+0000 7ff442d6c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: ms_deliver_dispatch: unhandled message 0x555671e99860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dyzcyj
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map Activating!
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.dyzcyj(active, starting, since 0.129164s), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map I am now activating
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.orbdku", "id": "compute-2.orbdku"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.orbdku", "id": "compute-2.orbdku"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.cgejnh", "id": "compute-1.cgejnh"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.cgejnh", "id": "compute-1.cgejnh"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr load_all_metadata Skipping incomplete metadata entry
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: balancer
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: [balancer INFO root] Starting
Dec  7 14:51:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Manager daemon compute-0.dyzcyj is now available
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:51:59
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:51:59 np0005549633 ceph-mgr[74680]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: cephadm
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: crash
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: dashboard
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: devicehealth
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO sso] Loading SSO DB version=1
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Starting
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: iostat
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: nfs
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: orchestrator
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: pg_autoscaler
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: progress
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [progress INFO root] Loading...
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7ff3bf0da250>, <progress.module.GhostEvent object at 0x7ff3bf0da280>, <progress.module.GhostEvent object at 0x7ff3bf0da2b0>, <progress.module.GhostEvent object at 0x7ff3bf0da2e0>, <progress.module.GhostEvent object at 0x7ff3bf0da310>, <progress.module.GhostEvent object at 0x7ff3bf0da340>, <progress.module.GhostEvent object at 0x7ff3bf0da370>, <progress.module.GhostEvent object at 0x7ff3bf0da3a0>, <progress.module.GhostEvent object at 0x7ff3bf0da3d0>, <progress.module.GhostEvent object at 0x7ff3bf0da400>] historic events
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] recovery thread starting
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] starting setup
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: rbd_support
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"} v 0)
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: restful
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: status
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: telemetry
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [restful WARNING root] server not running: no certificate configured
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: volumes
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  7 14:52:00 np0005549633 systemd-logind[797]: New session 34 of user ceph-admin.
Dec  7 14:52:00 np0005549633 systemd[1]: Started Session 34 of User ceph-admin.
Dec  7 14:52:00 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.module] Engine started.
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: Activating manager daemon compute-0.dyzcyj
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: Manager daemon compute-0.dyzcyj is now available
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2109095882,v1:192.168.122.102:6801/2109095882] boot
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.1a( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.1f( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:52:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.11( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.e( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.8( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.f( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=29/30 n=0 ec=24/18 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[3.1d( empty local-lis/les=29/30 n=0 ec=22/16 lis/c=29/29 les/c/f=30/30/0 sis=37) [2] r=-1 lpr=37 pi=[29,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:52:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 37 pg[2.1d( empty local-lis/les=21/22 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=37) [2] r=-1 lpr=37 pi=[21,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:52:01 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.dyzcyj(active, since 1.18564s), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec  7 14:52:01 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v4: 100 pgs: 18 peering, 64 active+clean, 18 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:01 np0005549633 agitated_taussig[88186]: Option GRAFANA_API_USERNAME updated
Dec  7 14:52:01 np0005549633 systemd[1]: libpod-6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760.scope: Deactivated successfully.
Dec  7 14:52:01 np0005549633 podman[88162]: 2025-12-07 19:52:01.053024092 +0000 UTC m=+7.087223699 container died 6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760 (image=quay.io/ceph/ceph:v19, name=agitated_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:52:01 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8dd9962c0819cc66b3e9c2cc832a592d4d3aec94b32c72fee4e8228acebbb858-merged.mount: Deactivated successfully.
Dec  7 14:52:01 np0005549633 podman[88162]: 2025-12-07 19:52:01.109326186 +0000 UTC m=+7.143525823 container remove 6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760 (image=quay.io/ceph/ceph:v19, name=agitated_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 14:52:01 np0005549633 systemd[1]: libpod-conmon-6d12dca3bc3018e61f233290c8db16e4e2d45bc7b468283079d3badc052ec760.scope: Deactivated successfully.
Dec  7 14:52:01 np0005549633 podman[88501]: 2025-12-07 19:52:01.500216819 +0000 UTC m=+0.076305500 container exec a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  7 14:52:01 np0005549633 python3[88500]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec  7 14:52:01 np0005549633 podman[88520]: 2025-12-07 19:52:01.5938198 +0000 UTC m=+0.041082650 container create 49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99 (image=quay.io/ceph/ceph:v19, name=dreamy_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:01 np0005549633 podman[88501]: 2025-12-07 19:52:01.619021033 +0000 UTC m=+0.195109704 container exec_died a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  7 14:52:01 np0005549633 systemd[1]: Started libpod-conmon-49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99.scope.
Dec  7 14:52:01 np0005549633 podman[88520]: 2025-12-07 19:52:01.577816832 +0000 UTC m=+0.025079702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:01 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d29523bd878b8169c1d651c5114ece6bdb0a5d4ef4fb43dc6bed11bb6bd10c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d29523bd878b8169c1d651c5114ece6bdb0a5d4ef4fb43dc6bed11bb6bd10c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:01 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84d29523bd878b8169c1d651c5114ece6bdb0a5d4ef4fb43dc6bed11bb6bd10c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:01 np0005549633 podman[88520]: 2025-12-07 19:52:01.700020527 +0000 UTC m=+0.147283457 container init 49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99 (image=quay.io/ceph/ceph:v19, name=dreamy_taussig, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:01 np0005549633 podman[88520]: 2025-12-07 19:52:01.711071112 +0000 UTC m=+0.158333992 container start 49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99 (image=quay.io/ceph/ceph:v19, name=dreamy_taussig, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 14:52:01 np0005549633 podman[88520]: 2025-12-07 19:52:01.715690736 +0000 UTC m=+0.162953676 container attach 49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99 (image=quay.io/ceph/ceph:v19, name=dreamy_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: OSD bench result of 6305.760076 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: osd.2 [v2:192.168.122.102:6800/2109095882,v1:192.168.122.102:6801/2109095882] boot
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:01 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v5: 100 pgs: 18 peering, 64 active+clean, 18 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec  7 14:52:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.dyzcyj(active, since 2s), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Check health
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14367 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:02 np0005549633 dreamy_taussig[88536]: Option GRAFANA_API_PASSWORD updated
Dec  7 14:52:02 np0005549633 systemd[1]: libpod-49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99.scope: Deactivated successfully.
Dec  7 14:52:02 np0005549633 podman[88520]: 2025-12-07 19:52:02.173190489 +0000 UTC m=+0.620453379 container died 49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99 (image=quay.io/ceph/ceph:v19, name=dreamy_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:02 np0005549633 systemd[1]: var-lib-containers-storage-overlay-84d29523bd878b8169c1d651c5114ece6bdb0a5d4ef4fb43dc6bed11bb6bd10c-merged.mount: Deactivated successfully.
Dec  7 14:52:02 np0005549633 podman[88520]: 2025-12-07 19:52:02.229661657 +0000 UTC m=+0.676924517 container remove 49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99 (image=quay.io/ceph/ceph:v19, name=dreamy_taussig, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:02 np0005549633 systemd[1]: libpod-conmon-49296ee91589bbf9cc91cee231ed57051ead260b8655969cf9de6305420bac99.scope: Deactivated successfully.
Dec  7 14:52:02 np0005549633 python3[88739]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku restarted
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku started
Dec  7 14:52:02 np0005549633 podman[88745]: 2025-12-07 19:52:02.753973644 +0000 UTC m=+0.066474146 container create 31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572 (image=quay.io/ceph/ceph:v19, name=reverent_cartwright, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 14:52:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:02] ENGINE Bus STARTING
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:02] ENGINE Bus STARTING
Dec  7 14:52:02 np0005549633 systemd[1]: Started libpod-conmon-31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572.scope.
Dec  7 14:52:02 np0005549633 podman[88745]: 2025-12-07 19:52:02.727980081 +0000 UTC m=+0.040480603 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:02 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bef773fe73d6224b15001e759f0097f10486b559dc08acc2c27fda5a9da736e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bef773fe73d6224b15001e759f0097f10486b559dc08acc2c27fda5a9da736e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bef773fe73d6224b15001e759f0097f10486b559dc08acc2c27fda5a9da736e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:02 np0005549633 podman[88745]: 2025-12-07 19:52:02.847330749 +0000 UTC m=+0.159831341 container init 31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572 (image=quay.io/ceph/ceph:v19, name=reverent_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:02 np0005549633 podman[88745]: 2025-12-07 19:52:02.855760284 +0000 UTC m=+0.168260796 container start 31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572 (image=quay.io/ceph/ceph:v19, name=reverent_cartwright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  7 14:52:02 np0005549633 podman[88745]: 2025-12-07 19:52:02.860624485 +0000 UTC m=+0.173125027 container attach 31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572 (image=quay.io/ceph/ceph:v19, name=reverent_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:02] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:02] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:02] ENGINE Client ('192.168.122.100', 40402) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:52:02 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:02] ENGINE Client ('192.168.122.100', 40402) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:03] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:03] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:03] ENGINE Bus STARTED
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:03] ENGINE Bus STARTED
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.dyzcyj(active, since 3s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14379 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 reverent_cartwright[88773]: Option ALERTMANAGER_API_HOST updated
Dec  7 14:52:03 np0005549633 systemd[1]: libpod-31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572.scope: Deactivated successfully.
Dec  7 14:52:03 np0005549633 podman[88745]: 2025-12-07 19:52:03.238786688 +0000 UTC m=+0.551287220 container died 31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572 (image=quay.io/ceph/ceph:v19, name=reverent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:03 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4bef773fe73d6224b15001e759f0097f10486b559dc08acc2c27fda5a9da736e-merged.mount: Deactivated successfully.
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:03 np0005549633 podman[88745]: 2025-12-07 19:52:03.372979042 +0000 UTC m=+0.685479554 container remove 31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572 (image=quay.io/ceph/ceph:v19, name=reverent_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:52:03 np0005549633 systemd[1]: libpod-conmon-31f278400dab9e34b8e08cf2bef8395299c53a69c6ab0003bfd1cf6d6be62572.scope: Deactivated successfully.
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 14:52:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:03 np0005549633 python3[88926]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:03 np0005549633 podman[88927]: 2025-12-07 19:52:03.890854908 +0000 UTC m=+0.077207563 container create 1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8 (image=quay.io/ceph/ceph:v19, name=vibrant_turing, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:03 np0005549633 systemd[1]: Started libpod-conmon-1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8.scope.
Dec  7 14:52:03 np0005549633 podman[88927]: 2025-12-07 19:52:03.860007664 +0000 UTC m=+0.046360359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:03 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v7: 100 pgs: 18 peering, 64 active+clean, 18 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:03 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c6d99080946df42993562c07f46d07276966d534ca51638c3c6e793972d19/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c6d99080946df42993562c07f46d07276966d534ca51638c3c6e793972d19/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c6d99080946df42993562c07f46d07276966d534ca51638c3c6e793972d19/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:03 np0005549633 podman[88927]: 2025-12-07 19:52:03.997593281 +0000 UTC m=+0.183945986 container init 1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8 (image=quay.io/ceph/ceph:v19, name=vibrant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 14:52:04 np0005549633 podman[88927]: 2025-12-07 19:52:04.010046083 +0000 UTC m=+0.196398738 container start 1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8 (image=quay.io/ceph/ceph:v19, name=vibrant_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:04 np0005549633 podman[88927]: 2025-12-07 19:52:04.018138179 +0000 UTC m=+0.204490954 container attach 1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8 (image=quay.io/ceph/ceph:v19, name=vibrant_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:02] ENGINE Bus STARTING
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:02] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:02] ENGINE Client ('192.168.122.100', 40402) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:03] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:03] ENGINE Bus STARTED
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:04 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:04 np0005549633 vibrant_turing[88943]: Option PROMETHEUS_API_HOST updated
Dec  7 14:52:04 np0005549633 systemd[1]: libpod-1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8.scope: Deactivated successfully.
Dec  7 14:52:04 np0005549633 podman[88927]: 2025-12-07 19:52:04.434655358 +0000 UTC m=+0.621008003 container died 1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8 (image=quay.io/ceph/ceph:v19, name=vibrant_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 14:52:04 np0005549633 systemd[1]: var-lib-containers-storage-overlay-ac5c6d99080946df42993562c07f46d07276966d534ca51638c3c6e793972d19-merged.mount: Deactivated successfully.
Dec  7 14:52:04 np0005549633 podman[88927]: 2025-12-07 19:52:04.499776187 +0000 UTC m=+0.686128842 container remove 1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8 (image=quay.io/ceph/ceph:v19, name=vibrant_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 14:52:04 np0005549633 systemd[1]: libpod-conmon-1a9979675b80a4386bb81ead8b5a160d975df18c90bd5cf8ce376be1dca6e4f8.scope: Deactivated successfully.
Dec  7 14:52:04 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.dyzcyj(active, since 4s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:04 np0005549633 python3[89004]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:04 np0005549633 podman[89005]: 2025-12-07 19:52:04.971176512 +0000 UTC m=+0.070918557 container create c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170 (image=quay.io/ceph/ceph:v19, name=eager_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:05 np0005549633 systemd[1]: Started libpod-conmon-c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170.scope.
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:05 np0005549633 podman[89005]: 2025-12-07 19:52:04.939652499 +0000 UTC m=+0.039394554 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:52:05 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:52:05 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da66eebed1f8d746782f921d595e39a76af17e0eb2bf59d72a947cb2b29b791/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:05 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da66eebed1f8d746782f921d595e39a76af17e0eb2bf59d72a947cb2b29b791/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:05 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2da66eebed1f8d746782f921d595e39a76af17e0eb2bf59d72a947cb2b29b791/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:05 np0005549633 podman[89005]: 2025-12-07 19:52:05.077746129 +0000 UTC m=+0.177488234 container init c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170 (image=quay.io/ceph/ceph:v19, name=eager_liskov, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  7 14:52:05 np0005549633 podman[89005]: 2025-12-07 19:52:05.087807047 +0000 UTC m=+0.187549062 container start c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170 (image=quay.io/ceph/ceph:v19, name=eager_liskov, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:52:05 np0005549633 podman[89005]: 2025-12-07 19:52:05.093235132 +0000 UTC m=+0.192977227 container attach c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170 (image=quay.io/ceph/ceph:v19, name=eager_liskov, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14391 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  7 14:52:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:05 np0005549633 eager_liskov[89021]: Option GRAFANA_API_URL updated
Dec  7 14:52:05 np0005549633 systemd[1]: libpod-c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170.scope: Deactivated successfully.
Dec  7 14:52:05 np0005549633 podman[89005]: 2025-12-07 19:52:05.499290961 +0000 UTC m=+0.599032956 container died c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170 (image=quay.io/ceph/ceph:v19, name=eager_liskov, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:05 np0005549633 systemd[1]: var-lib-containers-storage-overlay-2da66eebed1f8d746782f921d595e39a76af17e0eb2bf59d72a947cb2b29b791-merged.mount: Deactivated successfully.
Dec  7 14:52:05 np0005549633 podman[89005]: 2025-12-07 19:52:05.552199315 +0000 UTC m=+0.651941330 container remove c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170 (image=quay.io/ceph/ceph:v19, name=eager_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  7 14:52:05 np0005549633 systemd[1]: libpod-conmon-c3e1e7b9a5dfceea6b748cab0a6eeae71acf7ed7e0989da1c319ceb061794170.scope: Deactivated successfully.
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:05 np0005549633 python3[89281]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:05 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v8: 100 pgs: 18 peering, 64 active+clean, 18 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:06 np0005549633 podman[89329]: 2025-12-07 19:52:06.142742832 +0000 UTC m=+0.247323739 container create dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d (image=quay.io/ceph/ceph:v19, name=sleepy_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Dec  7 14:52:06 np0005549633 systemd[1]: Started libpod-conmon-dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d.scope.
Dec  7 14:52:06 np0005549633 podman[89329]: 2025-12-07 19:52:06.116641094 +0000 UTC m=+0.221222001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:06 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:06 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6b6d4100b1820aae1000ecf00b3441e1b0d5f76356d087d7ee7d7a9ca2436/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:06 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6b6d4100b1820aae1000ecf00b3441e1b0d5f76356d087d7ee7d7a9ca2436/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:06 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6b6d4100b1820aae1000ecf00b3441e1b0d5f76356d087d7ee7d7a9ca2436/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:06 np0005549633 podman[89329]: 2025-12-07 19:52:06.250214203 +0000 UTC m=+0.354795140 container init dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d (image=quay.io/ceph/ceph:v19, name=sleepy_dhawan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  7 14:52:06 np0005549633 podman[89329]: 2025-12-07 19:52:06.260502158 +0000 UTC m=+0.365083055 container start dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d (image=quay.io/ceph/ceph:v19, name=sleepy_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  7 14:52:06 np0005549633 podman[89329]: 2025-12-07 19:52:06.265252665 +0000 UTC m=+0.369833672 container attach dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d (image=quay.io/ceph/ceph:v19, name=sleepy_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: from='mgr.14334 192.168.122.100:0/2957312952' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  7 14:52:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/416825148' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/416825148' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/416825148' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.dyzcyj(active, since 7s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:07 np0005549633 systemd[1]: libpod-dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d.scope: Deactivated successfully.
Dec  7 14:52:07 np0005549633 podman[89817]: 2025-12-07 19:52:07.635062962 +0000 UTC m=+0.035720886 container died dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d (image=quay.io/ceph/ceph:v19, name=sleepy_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec  7 14:52:07 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8ce6b6d4100b1820aae1000ecf00b3441e1b0d5f76356d087d7ee7d7a9ca2436-merged.mount: Deactivated successfully.
Dec  7 14:52:07 np0005549633 podman[89817]: 2025-12-07 19:52:07.680064324 +0000 UTC m=+0.080722198 container remove dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d (image=quay.io/ceph/ceph:v19, name=sleepy_dhawan, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:07 np0005549633 systemd[1]: session-34.scope: Deactivated successfully.
Dec  7 14:52:07 np0005549633 systemd[1]: session-34.scope: Consumed 5.238s CPU time.
Dec  7 14:52:07 np0005549633 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Dec  7 14:52:07 np0005549633 systemd[1]: libpod-conmon-dea1c7421ac2cb8ae271ca71498ecf25c537384aaf29e0ad70f005b0bee1456d.scope: Deactivated successfully.
Dec  7 14:52:07 np0005549633 systemd-logind[797]: Removed session 34.
Dec  7 14:52:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setuser ceph since I am not root
Dec  7 14:52:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setgroup ceph since I am not root
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: pidfile_write: ignore empty --pid-file
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'alerts'
Dec  7 14:52:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:52:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:07.854+0000 7f05a4a96140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'balancer'
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:52:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'cephadm'
Dec  7 14:52:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:07.936+0000 7f05a4a96140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:52:08 np0005549633 python3[89875]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:08 np0005549633 podman[89876]: 2025-12-07 19:52:08.159645087 +0000 UTC m=+0.054231451 container create de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201 (image=quay.io/ceph/ceph:v19, name=naughty_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  7 14:52:08 np0005549633 systemd[1]: Started libpod-conmon-de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201.scope.
Dec  7 14:52:08 np0005549633 podman[89876]: 2025-12-07 19:52:08.135202524 +0000 UTC m=+0.029788868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:08 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381008810b289b967905e145dcc4deb764a5fca392898485970c5a1c547d0d85/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381008810b289b967905e145dcc4deb764a5fca392898485970c5a1c547d0d85/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:08 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381008810b289b967905e145dcc4deb764a5fca392898485970c5a1c547d0d85/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:08 np0005549633 podman[89876]: 2025-12-07 19:52:08.264537649 +0000 UTC m=+0.159123993 container init de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201 (image=quay.io/ceph/ceph:v19, name=naughty_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:08 np0005549633 podman[89876]: 2025-12-07 19:52:08.281267556 +0000 UTC m=+0.175853880 container start de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201 (image=quay.io/ceph/ceph:v19, name=naughty_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:08 np0005549633 podman[89876]: 2025-12-07 19:52:08.285563221 +0000 UTC m=+0.180149575 container attach de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201 (image=quay.io/ceph/ceph:v19, name=naughty_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/416825148' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  7 14:52:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'crash'
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  7 14:52:08 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4119653853' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 14:52:08 np0005549633 ceph-mgr[74680]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:52:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'dashboard'
Dec  7 14:52:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:08.784+0000 7f05a4a96140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'devicehealth'
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:09.422+0000 7f05a4a96140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 14:52:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4119653853' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 14:52:09 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.dyzcyj(active, since 9s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:09 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/4119653853' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  7 14:52:09 np0005549633 systemd[1]: libpod-de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201.scope: Deactivated successfully.
Dec  7 14:52:09 np0005549633 podman[89876]: 2025-12-07 19:52:09.55852209 +0000 UTC m=+1.453108414 container died de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201 (image=quay.io/ceph/ceph:v19, name=naughty_saha, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  from numpy import show_config as show_numpy_config
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:09.590+0000 7f05a4a96140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'influx'
Dec  7 14:52:09 np0005549633 systemd[1]: var-lib-containers-storage-overlay-381008810b289b967905e145dcc4deb764a5fca392898485970c5a1c547d0d85-merged.mount: Deactivated successfully.
Dec  7 14:52:09 np0005549633 podman[89876]: 2025-12-07 19:52:09.623392654 +0000 UTC m=+1.517979018 container remove de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201 (image=quay.io/ceph/ceph:v19, name=naughty_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 14:52:09 np0005549633 systemd[1]: libpod-conmon-de449d5c9316cf765bb09258acde3caf64ac68cb88c682cf2cc4d1c889a11201.scope: Deactivated successfully.
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:09.657+0000 7f05a4a96140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'insights'
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'iostat'
Dec  7 14:52:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:09.785+0000 7f05a4a96140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:52:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'k8sevents'
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'localpool'
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mirroring'
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'nfs'
Dec  7 14:52:10 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/4119653853' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  7 14:52:10 np0005549633 python3[90018]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:52:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:10.740+0000 7f05a4a96140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'orchestrator'
Dec  7 14:52:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:10.946+0000 7f05a4a96140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 14:52:10 np0005549633 python3[90089]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765137130.21825-37407-9828974455196/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:52:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:11.020+0000 7f05a4a96140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_support'
Dec  7 14:52:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:11.082+0000 7f05a4a96140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 14:52:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:11.156+0000 7f05a4a96140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'progress'
Dec  7 14:52:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:11.222+0000 7f05a4a96140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'prometheus'
Dec  7 14:52:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:11.548+0000 7f05a4a96140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rbd_support'
Dec  7 14:52:11 np0005549633 python3[90139]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:11.641+0000 7f05a4a96140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'restful'
Dec  7 14:52:11 np0005549633 podman[90140]: 2025-12-07 19:52:11.652307889 +0000 UTC m=+0.066890368 container create 69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941 (image=quay.io/ceph/ceph:v19, name=dreamy_shirley, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 14:52:11 np0005549633 podman[90140]: 2025-12-07 19:52:11.61791925 +0000 UTC m=+0.032501799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:11 np0005549633 systemd[1]: Started libpod-conmon-69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941.scope.
Dec  7 14:52:11 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55906547fad990db4f4c3c712d68e4e4cab4f4907fdafae1335327c72d64d27/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55906547fad990db4f4c3c712d68e4e4cab4f4907fdafae1335327c72d64d27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:11 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a55906547fad990db4f4c3c712d68e4e4cab4f4907fdafae1335327c72d64d27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:11 np0005549633 podman[90140]: 2025-12-07 19:52:11.782430416 +0000 UTC m=+0.197012945 container init 69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941 (image=quay.io/ceph/ceph:v19, name=dreamy_shirley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  7 14:52:11 np0005549633 podman[90140]: 2025-12-07 19:52:11.790521881 +0000 UTC m=+0.205104370 container start 69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941 (image=quay.io/ceph/ceph:v19, name=dreamy_shirley, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:11 np0005549633 podman[90140]: 2025-12-07 19:52:11.794800246 +0000 UTC m=+0.209382735 container attach 69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941 (image=quay.io/ceph/ceph:v19, name=dreamy_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 14:52:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rgw'
Dec  7 14:52:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:12.060+0000 7f05a4a96140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rook'
Dec  7 14:52:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:12.603+0000 7f05a4a96140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'selftest'
Dec  7 14:52:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:12.677+0000 7f05a4a96140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'snap_schedule'
Dec  7 14:52:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:12.757+0000 7f05a4a96140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'stats'
Dec  7 14:52:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'status'
Dec  7 14:52:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:12.909+0000 7f05a4a96140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telegraf'
Dec  7 14:52:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:12.980+0000 7f05a4a96140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:52:12 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telemetry'
Dec  7 14:52:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:13.138+0000 7f05a4a96140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 14:52:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:13.359+0000 7f05a4a96140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'volumes'
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh restarted
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh started
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.dyzcyj(active, since 13s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:13.645+0000 7f05a4a96140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'zabbix'
Dec  7 14:52:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:13.720+0000 7f05a4a96140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dyzcyj
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: ms_deliver_dispatch: unhandled message 0x55f9d0df1860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  1: '-n'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  2: 'mgr.compute-0.dyzcyj'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  3: '-f'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  4: '--setuser'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  5: 'ceph'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  6: '--setgroup'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  7: 'ceph'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  8: '--default-log-to-file=false'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  9: '--default-log-to-journald=true'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr respawn  exe_path /proc/self/exe
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.dyzcyj(active, starting, since 0.0370026s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku restarted
Dec  7 14:52:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku started
Dec  7 14:52:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setuser ceph since I am not root
Dec  7 14:52:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setgroup ceph since I am not root
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: pidfile_write: ignore empty --pid-file
Dec  7 14:52:13 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'alerts'
Dec  7 14:52:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:14.009+0000 7f0b0dd0f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'balancer'
Dec  7 14:52:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:14.091+0000 7f0b0dd0f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'cephadm'
Dec  7 14:52:14 np0005549633 ceph-mon[74384]: Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:52:14 np0005549633 ceph-mon[74384]: Activating manager daemon compute-0.dyzcyj
Dec  7 14:52:14 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.dyzcyj(active, starting, since 1.05063s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'crash'
Dec  7 14:52:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:14.980+0000 7f0b0dd0f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:52:14 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'dashboard'
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'devicehealth'
Dec  7 14:52:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:15.634+0000 7f0b0dd0f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 14:52:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 14:52:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 14:52:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  from numpy import show_config as show_numpy_config
Dec  7 14:52:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:15.800+0000 7f0b0dd0f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'influx'
Dec  7 14:52:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:15.868+0000 7f0b0dd0f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'insights'
Dec  7 14:52:15 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'iostat'
Dec  7 14:52:16 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:16.017+0000 7f0b0dd0f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'k8sevents'
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'localpool'
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mirroring'
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'nfs'
Dec  7 14:52:16 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:16.958+0000 7f0b0dd0f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:52:16 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'orchestrator'
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.160+0000 7f0b0dd0f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.232+0000 7f0b0dd0f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_support'
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.296+0000 7f0b0dd0f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.372+0000 7f0b0dd0f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'progress'
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.441+0000 7f0b0dd0f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'prometheus'
Dec  7 14:52:17 np0005549633 systemd[1]: Stopping User Manager for UID 42477...
Dec  7 14:52:17 np0005549633 systemd[75727]: Activating special unit Exit the Session...
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped target Main User Target.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped target Basic System.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped target Paths.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped target Sockets.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped target Timers.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  7 14:52:17 np0005549633 systemd[75727]: Closed D-Bus User Message Bus Socket.
Dec  7 14:52:17 np0005549633 systemd[75727]: Stopped Create User's Volatile Files and Directories.
Dec  7 14:52:17 np0005549633 systemd[75727]: Removed slice User Application Slice.
Dec  7 14:52:17 np0005549633 systemd[75727]: Reached target Shutdown.
Dec  7 14:52:17 np0005549633 systemd[75727]: Finished Exit the Session.
Dec  7 14:52:17 np0005549633 systemd[75727]: Reached target Exit the Session.
Dec  7 14:52:17 np0005549633 systemd[1]: user@42477.service: Deactivated successfully.
Dec  7 14:52:17 np0005549633 systemd[1]: Stopped User Manager for UID 42477.
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.780+0000 7f0b0dd0f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rbd_support'
Dec  7 14:52:17 np0005549633 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  7 14:52:17 np0005549633 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  7 14:52:17 np0005549633 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  7 14:52:17 np0005549633 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  7 14:52:17 np0005549633 systemd[1]: Removed slice User Slice of UID 42477.
Dec  7 14:52:17 np0005549633 systemd[1]: user-42477.slice: Consumed 36.233s CPU time.
Dec  7 14:52:17 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:17.882+0000 7f0b0dd0f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:52:17 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'restful'
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rgw'
Dec  7 14:52:18 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:18.322+0000 7f0b0dd0f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rook'
Dec  7 14:52:18 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:18.917+0000 7f0b0dd0f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'selftest'
Dec  7 14:52:18 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:18.996+0000 7f0b0dd0f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:52:18 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'snap_schedule'
Dec  7 14:52:19 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:19.079+0000 7f0b0dd0f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'stats'
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'status'
Dec  7 14:52:19 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:19.231+0000 7f0b0dd0f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telegraf'
Dec  7 14:52:19 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:19.311+0000 7f0b0dd0f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telemetry'
Dec  7 14:52:19 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:19.468+0000 7f0b0dd0f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 14:52:19 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh restarted
Dec  7 14:52:19 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh started
Dec  7 14:52:19 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.dyzcyj(active, starting, since 5s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:19 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:19.687+0000 7f0b0dd0f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'volumes'
Dec  7 14:52:19 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:19.956+0000 7f0b0dd0f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:52:19 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'zabbix'
Dec  7 14:52:20 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:52:20.031+0000 7f0b0dd0f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dyzcyj
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: ms_deliver_dispatch: unhandled message 0x563c7c039860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map Activating!
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map I am now activating
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.dyzcyj(active, starting, since 0.0387523s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-0.dyzcyj", "id": "compute-0.dyzcyj"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.cgejnh", "id": "compute-1.cgejnh"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-1.cgejnh", "id": "compute-1.cgejnh"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.orbdku", "id": "compute-2.orbdku"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr metadata", "who": "compute-2.orbdku", "id": "compute-2.orbdku"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e1 all = 1
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: balancer
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Starting
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Manager daemon compute-0.dyzcyj is now available
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:52:20
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: cephadm
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: crash
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: dashboard
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO sso] Loading SSO DB version=1
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: devicehealth
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: iostat
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Starting
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: nfs
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: orchestrator
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: pg_autoscaler
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: progress
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [progress INFO root] Loading...
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f0a900c9160>, <progress.module.GhostEvent object at 0x7f0a900c93d0>, <progress.module.GhostEvent object at 0x7f0a900c9400>, <progress.module.GhostEvent object at 0x7f0a900c9430>, <progress.module.GhostEvent object at 0x7f0a900c9460>, <progress.module.GhostEvent object at 0x7f0a900c9490>, <progress.module.GhostEvent object at 0x7f0a900c94c0>, <progress.module.GhostEvent object at 0x7f0a900c94f0>, <progress.module.GhostEvent object at 0x7f0a900c9520>, <progress.module.GhostEvent object at 0x7f0a900c9550>] historic events
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] recovery thread starting
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] starting setup
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: rbd_support
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: restful
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku restarted
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku started
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: status
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: telemetry
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [restful WARNING root] server not running: no certificate configured
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] PerfHandler: starting
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TaskHandler: starting
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"} v 0)
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: volumes
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] setup complete
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  7 14:52:20 np0005549633 systemd[1]: Created slice User Slice of UID 42477.
Dec  7 14:52:20 np0005549633 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  7 14:52:20 np0005549633 systemd-logind[797]: New session 35 of user ceph-admin.
Dec  7 14:52:20 np0005549633 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  7 14:52:20 np0005549633 systemd[1]: Starting User Manager for UID 42477...
Dec  7 14:52:20 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.module] Engine started.
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: Activating manager daemon compute-0.dyzcyj
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: Manager daemon compute-0.dyzcyj is now available
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:52:20 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:52:20 np0005549633 systemd[90343]: Queued start job for default target Main User Target.
Dec  7 14:52:20 np0005549633 systemd[90343]: Created slice User Application Slice.
Dec  7 14:52:20 np0005549633 systemd[90343]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  7 14:52:20 np0005549633 systemd[90343]: Started Daily Cleanup of User's Temporary Directories.
Dec  7 14:52:20 np0005549633 systemd[90343]: Reached target Paths.
Dec  7 14:52:20 np0005549633 systemd[90343]: Reached target Timers.
Dec  7 14:52:20 np0005549633 systemd[90343]: Starting D-Bus User Message Bus Socket...
Dec  7 14:52:20 np0005549633 systemd[90343]: Starting Create User's Volatile Files and Directories...
Dec  7 14:52:20 np0005549633 systemd[90343]: Finished Create User's Volatile Files and Directories.
Dec  7 14:52:20 np0005549633 systemd[90343]: Listening on D-Bus User Message Bus Socket.
Dec  7 14:52:20 np0005549633 systemd[90343]: Reached target Sockets.
Dec  7 14:52:20 np0005549633 systemd[90343]: Reached target Basic System.
Dec  7 14:52:20 np0005549633 systemd[90343]: Reached target Main User Target.
Dec  7 14:52:20 np0005549633 systemd[90343]: Startup finished in 152ms.
Dec  7 14:52:20 np0005549633 systemd[1]: Started User Manager for UID 42477.
Dec  7 14:52:20 np0005549633 systemd[1]: Started Session 35 of User ceph-admin.
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.dyzcyj(active, since 1.2469s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  7 14:52:21 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0[74380]: 2025-12-07T19:52:21.313+0000 7f5e0b69e640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v3: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e2 new map
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-12-07T19:52:21:314395+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:21.314295+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 14:52:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:21] ENGINE Bus STARTING
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:21] ENGINE Bus STARTING
Dec  7 14:52:21 np0005549633 systemd[1]: libpod-69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941.scope: Deactivated successfully.
Dec  7 14:52:21 np0005549633 podman[90140]: 2025-12-07 19:52:21.38445492 +0000 UTC m=+9.799037389 container died 69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941 (image=quay.io/ceph/ceph:v19, name=dreamy_shirley, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:21 np0005549633 systemd[1]: var-lib-containers-storage-overlay-a55906547fad990db4f4c3c712d68e4e4cab4f4907fdafae1335327c72d64d27-merged.mount: Deactivated successfully.
Dec  7 14:52:21 np0005549633 podman[90140]: 2025-12-07 19:52:21.433836979 +0000 UTC m=+9.848419428 container remove 69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941 (image=quay.io/ceph/ceph:v19, name=dreamy_shirley, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:21 np0005549633 systemd[1]: libpod-conmon-69098b1878f1d289af39d795fbe41ae5c0ebb25f5701e04751f372abda2b3941.scope: Deactivated successfully.
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:21] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:21] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:21] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:21] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:21] ENGINE Bus STARTED
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:21] ENGINE Bus STARTED
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:52:21] ENGINE Client ('192.168.122.100', 46272) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:52:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:52:21] ENGINE Client ('192.168.122.100', 46272) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:52:21 np0005549633 podman[90536]: 2025-12-07 19:52:21.791447283 +0000 UTC m=+0.078730774 container exec a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:21 np0005549633 python3[90528]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:21 np0005549633 podman[90557]: 2025-12-07 19:52:21.873762822 +0000 UTC m=+0.060093776 container create 49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd (image=quay.io/ceph/ceph:v19, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:21 np0005549633 podman[90536]: 2025-12-07 19:52:21.894981499 +0000 UTC m=+0.182265020 container exec_died a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  7 14:52:21 np0005549633 systemd[1]: Started libpod-conmon-49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd.scope.
Dec  7 14:52:21 np0005549633 podman[90557]: 2025-12-07 19:52:21.847632514 +0000 UTC m=+0.033963518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:21 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5287fd0891f7e368464c185bdaf10bf8fb5a6629b337b161c7ee5854b3d0a65e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5287fd0891f7e368464c185bdaf10bf8fb5a6629b337b161c7ee5854b3d0a65e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:21 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5287fd0891f7e368464c185bdaf10bf8fb5a6629b337b161c7ee5854b3d0a65e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:21 np0005549633 podman[90557]: 2025-12-07 19:52:21.996173452 +0000 UTC m=+0.182504416 container init 49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd (image=quay.io/ceph/ceph:v19, name=eager_euler, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:22 np0005549633 podman[90557]: 2025-12-07 19:52:22.009267213 +0000 UTC m=+0.195598137 container start 49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd (image=quay.io/ceph/ceph:v19, name=eager_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:22 np0005549633 podman[90557]: 2025-12-07 19:52:22.013755373 +0000 UTC m=+0.200086377 container attach 49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd (image=quay.io/ceph/ceph:v19, name=eager_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:22 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v5: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:22 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Check health
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:22 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:22 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:22 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:24 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v6: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:21] ENGINE Bus STARTING
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:21] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:21] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:21] ENGINE Bus STARTED
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: [07/Dec/2025:19:52:21] ENGINE Client ('192.168.122.100', 46272) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.dyzcyj(active, since 4s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 eager_euler[90584]: Scheduled mds.cephfs update...
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 systemd[1]: libpod-49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd.scope: Deactivated successfully.
Dec  7 14:52:24 np0005549633 podman[90557]: 2025-12-07 19:52:24.378263095 +0000 UTC m=+2.564594059 container died 49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd (image=quay.io/ceph/ceph:v19, name=eager_euler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:52:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:24 np0005549633 systemd[1]: var-lib-containers-storage-overlay-5287fd0891f7e368464c185bdaf10bf8fb5a6629b337b161c7ee5854b3d0a65e-merged.mount: Deactivated successfully.
Dec  7 14:52:24 np0005549633 podman[90557]: 2025-12-07 19:52:24.439337886 +0000 UTC m=+2.625668840 container remove 49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd (image=quay.io/ceph/ceph:v19, name=eager_euler, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  7 14:52:24 np0005549633 systemd[1]: libpod-conmon-49b3ca60b3c30d1880531bae57a55526fff31877f2dfc0521a18e51eb6e256fd.scope: Deactivated successfully.
Dec  7 14:52:24 np0005549633 python3[90756]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:24 np0005549633 podman[90769]: 2025-12-07 19:52:24.929863382 +0000 UTC m=+0.067949727 container create 9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe (image=quay.io/ceph/ceph:v19, name=romantic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 14:52:24 np0005549633 systemd[1]: Started libpod-conmon-9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe.scope.
Dec  7 14:52:25 np0005549633 podman[90769]: 2025-12-07 19:52:24.906452726 +0000 UTC m=+0.044539091 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:25 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:25 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b39ce0b372c79b51d0c297b44c6f33f1d94e13d6f8185fbc638c8415ab50095/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:25 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b39ce0b372c79b51d0c297b44c6f33f1d94e13d6f8185fbc638c8415ab50095/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:25 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b39ce0b372c79b51d0c297b44c6f33f1d94e13d6f8185fbc638c8415ab50095/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:25 np0005549633 podman[90769]: 2025-12-07 19:52:25.045036579 +0000 UTC m=+0.183123014 container init 9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe (image=quay.io/ceph/ceph:v19, name=romantic_perlman, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:25 np0005549633 podman[90769]: 2025-12-07 19:52:25.05370187 +0000 UTC m=+0.191788255 container start 9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe (image=quay.io/ceph/ceph:v19, name=romantic_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  7 14:52:25 np0005549633 podman[90769]: 2025-12-07 19:52:25.058236991 +0000 UTC m=+0.196323436 container attach 9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe (image=quay.io/ceph/ceph:v19, name=romantic_perlman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.dyzcyj(active, since 5s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14463 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  7 14:52:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:52:25 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v7: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-2 to 128.0M
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  7 14:52:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:26 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-2 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-1 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Updating compute-0:/etc/ceph/ceph.conf
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Updating compute-1:/etc/ceph/ceph.conf
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Updating compute-2:/etc/ceph/ceph.conf
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:27 np0005549633 systemd[1]: libpod-9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe.scope: Deactivated successfully.
Dec  7 14:52:27 np0005549633 podman[90769]: 2025-12-07 19:52:27.731005249 +0000 UTC m=+2.869091634 container died 9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe (image=quay.io/ceph/ceph:v19, name=romantic_perlman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  7 14:52:27 np0005549633 systemd[1]: var-lib-containers-storage-overlay-5b39ce0b372c79b51d0c297b44c6f33f1d94e13d6f8185fbc638c8415ab50095-merged.mount: Deactivated successfully.
Dec  7 14:52:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:27 np0005549633 podman[90769]: 2025-12-07 19:52:27.795879543 +0000 UTC m=+2.933965918 container remove 9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe (image=quay.io/ceph/ceph:v19, name=romantic_perlman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:27 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:27 np0005549633 systemd[1]: libpod-conmon-9ef1b0276afc3f5c4b857cc6bdc2a3790ac5db91e93e53de27080a3a6e574abe.scope: Deactivated successfully.
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v10: 101 pgs: 1 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:28 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:28 np0005549633 python3[91563]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid glance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:28 np0005549633 podman[91614]: 2025-12-07 19:52:28.703715997 +0000 UTC m=+0.072117238 container create a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50 (image=quay.io/ceph/ceph:v19, name=bold_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:28 np0005549633 systemd[1]: Started libpod-conmon-a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50.scope.
Dec  7 14:52:28 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:28 np0005549633 podman[91614]: 2025-12-07 19:52:28.674375803 +0000 UTC m=+0.042777114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:28 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d9a4ea60e28b370801f04fe701472ef8aa77968707491bca71119822fac102/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:28 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d9a4ea60e28b370801f04fe701472ef8aa77968707491bca71119822fac102/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:28 np0005549633 podman[91614]: 2025-12-07 19:52:28.78396248 +0000 UTC m=+0.152363751 container init a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50 (image=quay.io/ceph/ceph:v19, name=bold_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 14:52:28 np0005549633 podman[91614]: 2025-12-07 19:52:28.793524806 +0000 UTC m=+0.161926087 container start a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50 (image=quay.io/ceph/ceph:v19, name=bold_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  7 14:52:28 np0005549633 podman[91614]: 2025-12-07 19:52:28.797669067 +0000 UTC m=+0.166070338 container attach a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50 (image=quay.io/ceph/ceph:v19, name=bold_meitner, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  7 14:52:28 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.conf
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.dyzcyj(active, since 9s), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:29 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev cae26a65-c2c6-47ab-b388-c5be86a7bf4d (Updating node-exporter deployment (+3 -> 3))
Dec  7 14:52:29 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec  7 14:52:29 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec  7 14:52:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  7 14:52:30 np0005549633 systemd[1]: Reloading.
Dec  7 14:52:30 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v12: 101 pgs: 1 unknown, 100 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2900692685' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  7 14:52:30 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec  7 14:52:30 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:52:30 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: Updating compute-1:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: Updating compute-2:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: Updating compute-0:/var/lib/ceph/a8ac706f-8288-541e-8e56-e1124d9b483d/config/ceph.client.admin.keyring
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:30 np0005549633 ceph-mon[74384]: Deploying daemon node-exporter.compute-0 on compute-0
Dec  7 14:52:30 np0005549633 systemd[1]: Reloading.
Dec  7 14:52:30 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:52:30 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:52:30 np0005549633 systemd[1]: Starting Ceph node-exporter.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:52:31 np0005549633 bash[92103]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec  7 14:52:31 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  7 14:52:31 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2900692685' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  7 14:52:31 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  7 14:52:31 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  7 14:52:31 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2900692685' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  7 14:52:31 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/2900692685' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  7 14:52:31 np0005549633 bash[92103]: Getting image source signatures
Dec  7 14:52:31 np0005549633 bash[92103]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec  7 14:52:31 np0005549633 bash[92103]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec  7 14:52:31 np0005549633 bash[92103]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec  7 14:52:32 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v15: 102 pgs: 102 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 882 B/s rd, 1.3 KiB/s wr, 3 op/s
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  7 14:52:32 np0005549633 bash[92103]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec  7 14:52:32 np0005549633 bash[92103]: Writing manifest to image destination
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 14:52:32 np0005549633 podman[92103]: 2025-12-07 19:52:32.211534114 +0000 UTC m=+1.181898138 container create 738fbf4b61e3e049ea6c6ad82a2f478b4ef919ad4cb7a6647209e9c5acce1efb (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:52:32 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 47 pg[10.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:52:32 np0005549633 podman[92103]: 2025-12-07 19:52:32.190993105 +0000 UTC m=+1.161357159 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  7 14:52:32 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae5647ed615eb85ab7b0edf7187b912932ef3ba3910cbde29e5d5c895826735a/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:32 np0005549633 podman[92103]: 2025-12-07 19:52:32.298010114 +0000 UTC m=+1.268374158 container init 738fbf4b61e3e049ea6c6ad82a2f478b4ef919ad4cb7a6647209e9c5acce1efb (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:52:32 np0005549633 podman[92103]: 2025-12-07 19:52:32.30835426 +0000 UTC m=+1.278718284 container start 738fbf4b61e3e049ea6c6ad82a2f478b4ef919ad4cb7a6647209e9c5acce1efb (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:52:32 np0005549633 bash[92103]: 738fbf4b61e3e049ea6c6ad82a2f478b4ef919ad4cb7a6647209e9c5acce1efb
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.319Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.320Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  7 14:52:32 np0005549633 systemd[1]: Started Ceph node-exporter.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.325Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.325Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=arp
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=bcache
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=bonding
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=cpu
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=dmi
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=edac
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=entropy
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=filefd
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=netclass
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=netdev
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=netstat
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=nfs
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=nvme
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.326Z caller=node_exporter.go:117 level=info collector=os
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=pressure
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=rapl
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=selinux
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=softnet
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=stat
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=textfile
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=time
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=uname
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=xfs
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.327Z caller=node_exporter.go:117 level=info collector=zfs
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.328Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  7 14:52:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0[92228]: ts=2025-12-07T19:52:32.328Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:32 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec  7 14:52:32 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec  7 14:52:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:33 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  7 14:52:33 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  7 14:52:33 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:33 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:33 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:33 np0005549633 ceph-mon[74384]: Deploying daemon node-exporter.compute-1 on compute-1
Dec  7 14:52:34 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v17: 103 pgs: 1 unknown, 102 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 796 B/s rd, 1.2 KiB/s wr, 2 op/s
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  7 14:52:34 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 48 pg[10.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [1] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 14:52:34 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:34 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec  7 14:52:34 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:35 np0005549633 ceph-mon[74384]: Deploying daemon node-exporter.compute-2 on compute-2
Dec  7 14:52:36 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v20: 104 pgs: 2 unknown, 102 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 831 B/s rd, 1.2 KiB/s wr, 3 op/s
Dec  7 14:52:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  7 14:52:36 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 14:52:36 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  7 14:52:36 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  7 14:52:36 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  7 14:52:36 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  7 14:52:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  7 14:52:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  7 14:52:37 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  7 14:52:37 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 51 pg[12.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:52:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  7 14:52:37 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 14:52:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:38 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v23: 105 pgs: 1 unknown, 104 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev cae26a65-c2c6-47ab-b388-c5be86a7bf4d (Updating node-exporter deployment (+3 -> 3))
Dec  7 14:52:38 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event cae26a65-c2c6-47ab-b388-c5be86a7bf4d (Updating node-exporter deployment (+3 -> 3)) in 9 seconds
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 14:52:38 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 52 pg[12.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [1] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  7 14:52:38 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  7 14:52:38 np0005549633 podman[92332]: 2025-12-07 19:52:38.930099282 +0000 UTC m=+0.067847104 container create 4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 14:52:38 np0005549633 systemd[1]: Started libpod-conmon-4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1.scope.
Dec  7 14:52:38 np0005549633 podman[92332]: 2025-12-07 19:52:38.90162575 +0000 UTC m=+0.039373632 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:39 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:39 np0005549633 podman[92332]: 2025-12-07 19:52:39.046174553 +0000 UTC m=+0.183922395 container init 4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  7 14:52:39 np0005549633 podman[92332]: 2025-12-07 19:52:39.057201128 +0000 UTC m=+0.194948950 container start 4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meitner, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:39 np0005549633 podman[92332]: 2025-12-07 19:52:39.061996855 +0000 UTC m=+0.199744747 container attach 4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meitner, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  7 14:52:39 np0005549633 stupefied_meitner[92348]: 167 167
Dec  7 14:52:39 np0005549633 systemd[1]: libpod-4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1.scope: Deactivated successfully.
Dec  7 14:52:39 np0005549633 podman[92332]: 2025-12-07 19:52:39.066912597 +0000 UTC m=+0.204660429 container died 4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meitner, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:39 np0005549633 systemd[1]: var-lib-containers-storage-overlay-9448cfead5befacde33effd9de6a36bc2a27c3df86931e08cefa9bb7c64e0c33-merged.mount: Deactivated successfully.
Dec  7 14:52:39 np0005549633 podman[92332]: 2025-12-07 19:52:39.125880623 +0000 UTC m=+0.263628455 container remove 4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_meitner, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:39 np0005549633 systemd[1]: libpod-conmon-4d17d995807fd33a50b8b51aeb14e05eaa31c8d40779393be61081bf7c4d15e1.scope: Deactivated successfully.
Dec  7 14:52:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  7 14:52:39 np0005549633 podman[92371]: 2025-12-07 19:52:39.390449161 +0000 UTC m=+0.075338464 container create 335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:39 np0005549633 systemd[1]: Started libpod-conmon-335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb.scope.
Dec  7 14:52:39 np0005549633 podman[92371]: 2025-12-07 19:52:39.358002813 +0000 UTC m=+0.042892176 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:39 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 14:52:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  7 14:52:39 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:39 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  7 14:52:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c0e40ef1a1e81bc48bce985df5f0cab7656847dfc63a042d8bffd41d61ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:39 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  7 14:52:39 np0005549633 bold_meitner[91675]: could not fetch user info: no user info saved
Dec  7 14:52:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c0e40ef1a1e81bc48bce985df5f0cab7656847dfc63a042d8bffd41d61ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c0e40ef1a1e81bc48bce985df5f0cab7656847dfc63a042d8bffd41d61ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c0e40ef1a1e81bc48bce985df5f0cab7656847dfc63a042d8bffd41d61ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/674c0e40ef1a1e81bc48bce985df5f0cab7656847dfc63a042d8bffd41d61ca8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:39 np0005549633 podman[92371]: 2025-12-07 19:52:39.513055756 +0000 UTC m=+0.197945099 container init 335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:39 np0005549633 podman[92371]: 2025-12-07 19:52:39.531274443 +0000 UTC m=+0.216163736 container start 335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Dec  7 14:52:39 np0005549633 podman[92371]: 2025-12-07 19:52:39.535867635 +0000 UTC m=+0.220756948 container attach 335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:39 np0005549633 systemd[1]: libpod-a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50.scope: Deactivated successfully.
Dec  7 14:52:39 np0005549633 podman[91614]: 2025-12-07 19:52:39.584182337 +0000 UTC m=+10.952583608 container died a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50 (image=quay.io/ceph/ceph:v19, name=bold_meitner, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:39 np0005549633 systemd[1]: var-lib-containers-storage-overlay-26d9a4ea60e28b370801f04fe701472ef8aa77968707491bca71119822fac102-merged.mount: Deactivated successfully.
Dec  7 14:52:39 np0005549633 podman[91614]: 2025-12-07 19:52:39.653722255 +0000 UTC m=+11.022123526 container remove a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50 (image=quay.io/ceph/ceph:v19, name=bold_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:39 np0005549633 systemd[1]: libpod-conmon-a98b61a0d3c318c844417431b21d6df113b7fc1426134964ea75b3d43b1b1b50.scope: Deactivated successfully.
Dec  7 14:52:39 np0005549633 adoring_jang[92387]: --> passed data devices: 0 physical, 1 LVM
Dec  7 14:52:39 np0005549633 adoring_jang[92387]: --> All data devices are unavailable
Dec  7 14:52:40 np0005549633 systemd[1]: libpod-335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb.scope: Deactivated successfully.
Dec  7 14:52:40 np0005549633 podman[92371]: 2025-12-07 19:52:40.037262962 +0000 UTC m=+0.722152275 container died 335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:40 np0005549633 systemd[1]: var-lib-containers-storage-overlay-674c0e40ef1a1e81bc48bce985df5f0cab7656847dfc63a042d8bffd41d61ca8-merged.mount: Deactivated successfully.
Dec  7 14:52:40 np0005549633 python3[92444]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="glance" --display-name="Glance S3 User" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:40 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v26: 105 pgs: 1 unknown, 104 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:52:40 np0005549633 podman[92371]: 2025-12-07 19:52:40.095706482 +0000 UTC m=+0.780595755 container remove 335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:40 np0005549633 systemd[1]: libpod-conmon-335eb6364e86503aa4d52d6f679868e4ed7e46cde313d2ad6fd0146ddda3f3fb.scope: Deactivated successfully.
Dec  7 14:52:40 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 11 completed events
Dec  7 14:52:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.181011832 +0000 UTC m=+0.062674246 container create 8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091 (image=quay.io/ceph/ceph:v19, name=goofy_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 14:52:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:40 np0005549633 systemd[1]: Started libpod-conmon-8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091.scope.
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.152468349 +0000 UTC m=+0.034130843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:40 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:40 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585b2408e8ca29dedac109e26269e36a8ee8ba0d7c717810e9cbacfdbbb344fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:40 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/585b2408e8ca29dedac109e26269e36a8ee8ba0d7c717810e9cbacfdbbb344fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.306844384 +0000 UTC m=+0.188506828 container init 8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091 (image=quay.io/ceph/ceph:v19, name=goofy_shockley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.323064607 +0000 UTC m=+0.204727051 container start 8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091 (image=quay.io/ceph/ceph:v19, name=goofy_shockley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.331620786 +0000 UTC m=+0.213283250 container attach 8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091 (image=quay.io/ceph/ceph:v19, name=goofy_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:40 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 14:52:40 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3358853812' entity='client.admin' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  7 14:52:40 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]: {
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "user_id": "glance",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "display_name": "Glance S3 User",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "email": "",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "suspended": 0,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "max_buckets": 1000,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "subusers": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "keys": [
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        {
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:            "user": "glance",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:            "access_key": "CHLP4NTXOGGDYDEBBBFN",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:            "secret_key": "mEhH6d96rHd8q59YNcnT1gcVoA57UlP0kJKS7Fr5",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:            "active": true,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:            "create_date": "2025-12-07T19:52:40.538413Z"
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        }
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    ],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "swift_keys": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "caps": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "op_mask": "read, write, delete",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "default_placement": "",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "default_storage_class": "",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "placement_tags": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "bucket_quota": {
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "enabled": false,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "check_on_raw": false,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "max_size": -1,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "max_size_kb": 0,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "max_objects": -1
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    },
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "user_quota": {
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "enabled": false,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "check_on_raw": false,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "max_size": -1,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "max_size_kb": 0,
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:        "max_objects": -1
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    },
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "temp_url_keys": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "type": "rgw",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "mfa_ids": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "account_id": "",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "path": "/",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "create_date": "2025-12-07T19:52:40.537925Z",
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "tags": [],
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]:    "group_ids": []
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]: }
Dec  7 14:52:40 np0005549633 goofy_shockley[92499]: 
Dec  7 14:52:40 np0005549633 systemd[1]: libpod-8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091.scope: Deactivated successfully.
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.611308688 +0000 UTC m=+0.492971132 container died 8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091 (image=quay.io/ceph/ceph:v19, name=goofy_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:40 np0005549633 systemd[1]: var-lib-containers-storage-overlay-585b2408e8ca29dedac109e26269e36a8ee8ba0d7c717810e9cbacfdbbb344fc-merged.mount: Deactivated successfully.
Dec  7 14:52:40 np0005549633 podman[92461]: 2025-12-07 19:52:40.683980849 +0000 UTC m=+0.565643293 container remove 8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091 (image=quay.io/ceph/ceph:v19, name=goofy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  7 14:52:40 np0005549633 systemd[1]: libpod-conmon-8c41057f6e8fe9737c8a3bd143ad67ef451614e37b3de8cf9dfbb65756c4d091.scope: Deactivated successfully.
Dec  7 14:52:40 np0005549633 podman[92663]: 2025-12-07 19:52:40.881669431 +0000 UTC m=+0.072320993 container create e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  7 14:52:40 np0005549633 systemd[1]: Started libpod-conmon-e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2.scope.
Dec  7 14:52:40 np0005549633 podman[92663]: 2025-12-07 19:52:40.85020804 +0000 UTC m=+0.040859652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:40 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:41 np0005549633 podman[92663]: 2025-12-07 19:52:41.009434655 +0000 UTC m=+0.200086267 container init e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mendeleev, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:41 np0005549633 podman[92663]: 2025-12-07 19:52:41.02311546 +0000 UTC m=+0.213767012 container start e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  7 14:52:41 np0005549633 infallible_mendeleev[92704]: 167 167
Dec  7 14:52:41 np0005549633 podman[92663]: 2025-12-07 19:52:41.029771387 +0000 UTC m=+0.220423009 container attach e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:41 np0005549633 systemd[1]: libpod-e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2.scope: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92663]: 2025-12-07 19:52:41.030187939 +0000 UTC m=+0.220839511 container died e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:41 np0005549633 systemd[1]: var-lib-containers-storage-overlay-16d8326304462e2654d4a4eec206f47205814e11d70c81d678c5425809e51e12-merged.mount: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92663]: 2025-12-07 19:52:41.081655143 +0000 UTC m=+0.272306715 container remove e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_mendeleev, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  7 14:52:41 np0005549633 systemd[1]: libpod-conmon-e2e5502b268f8ba3b80f3d3cce69994e4c7b0bc557abebb7a2cdfba4814089c2.scope: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.208302748 +0000 UTC m=+0.077219244 container create 1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41 (image=quay.io/ceph/ceph:v19, name=stupefied_lewin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:52:41 np0005549633 systemd[1]: Started libpod-conmon-1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41.scope.
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.18145339 +0000 UTC m=+0.050369976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:41 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd44c294e194409fca316692a11563cab01dceb534a9c2321aa4add2d93cff06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd44c294e194409fca316692a11563cab01dceb534a9c2321aa4add2d93cff06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.327521823 +0000 UTC m=+0.196438359 container init 1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41 (image=quay.io/ceph/ceph:v19, name=stupefied_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.333309827 +0000 UTC m=+0.069405225 container create 811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_banzai, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.337287414 +0000 UTC m=+0.206203910 container start 1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41 (image=quay.io/ceph/ceph:v19, name=stupefied_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.340823488 +0000 UTC m=+0.209740064 container attach 1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41 (image=quay.io/ceph/ceph:v19, name=stupefied_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 14:52:41 np0005549633 systemd[1]: Started libpod-conmon-811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d.scope.
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.30757251 +0000 UTC m=+0.043667898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:41 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840d3e4ae30faae72e05e84867ca1b69648f1298f539203aa92123d3f72adfa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840d3e4ae30faae72e05e84867ca1b69648f1298f539203aa92123d3f72adfa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840d3e4ae30faae72e05e84867ca1b69648f1298f539203aa92123d3f72adfa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:41 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840d3e4ae30faae72e05e84867ca1b69648f1298f539203aa92123d3f72adfa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.435087957 +0000 UTC m=+0.171183395 container init 811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_banzai, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.45057557 +0000 UTC m=+0.186670968 container start 811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.455008579 +0000 UTC m=+0.191103977 container attach 811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:41 np0005549633 ceph-mon[74384]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]: {
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "user_id": "glance",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "display_name": "Glance S3 User",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "email": "",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "suspended": 0,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "max_buckets": 1000,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "subusers": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "keys": [
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        {
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:            "user": "glance",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:            "access_key": "CHLP4NTXOGGDYDEBBBFN",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:            "secret_key": "mEhH6d96rHd8q59YNcnT1gcVoA57UlP0kJKS7Fr5",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:            "active": true,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:            "create_date": "2025-12-07T19:52:40.538413Z"
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        }
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    ],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "swift_keys": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "caps": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "op_mask": "read, write, delete",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "default_placement": "",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "default_storage_class": "",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "placement_tags": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "bucket_quota": {
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "enabled": false,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "check_on_raw": false,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "max_size": -1,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "max_size_kb": 0,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "max_objects": -1
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    },
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "user_quota": {
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "enabled": false,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "check_on_raw": false,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "max_size": -1,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "max_size_kb": 0,
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:        "max_objects": -1
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    },
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "temp_url_keys": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "type": "rgw",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "mfa_ids": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "account_id": "",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "path": "/",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "create_date": "2025-12-07T19:52:40.537925Z",
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "tags": [],
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]:    "group_ids": []
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]: }
Dec  7 14:52:41 np0005549633 stupefied_lewin[92752]: 
Dec  7 14:52:41 np0005549633 systemd[1]: libpod-1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41.scope: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.657827957 +0000 UTC m=+0.526744503 container died 1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41 (image=quay.io/ceph/ceph:v19, name=stupefied_lewin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Dec  7 14:52:41 np0005549633 systemd[1]: var-lib-containers-storage-overlay-cd44c294e194409fca316692a11563cab01dceb534a9c2321aa4add2d93cff06-merged.mount: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92724]: 2025-12-07 19:52:41.773165628 +0000 UTC m=+0.642082124 container remove 1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41 (image=quay.io/ceph/ceph:v19, name=stupefied_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  7 14:52:41 np0005549633 systemd[1]: libpod-conmon-1351dd0913568401d93df19761cecc8cb639406cbaf270eae3e37509f8ba9a41.scope: Deactivated successfully.
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]: {
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:    "1": [
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:        {
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "devices": [
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "/dev/loop3"
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            ],
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "lv_name": "ceph_lv0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "lv_size": "21470642176",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8ac706f-8288-541e-8e56-e1124d9b483d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=bde32eb9-6f67-49a9-82c5-0c88a97712bc,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "lv_uuid": "SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "name": "ceph_lv0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "tags": {
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.block_uuid": "SG7yNj-LGVN-UKbN-ZzcX-0VY6-5Amo-UTju0q",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.cephx_lockbox_secret": "",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.cluster_fsid": "a8ac706f-8288-541e-8e56-e1124d9b483d",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.cluster_name": "ceph",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.crush_device_class": "",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.encrypted": "0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.osd_fsid": "bde32eb9-6f67-49a9-82c5-0c88a97712bc",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.osd_id": "1",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.type": "block",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.vdo": "0",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:                "ceph.with_tpm": "0"
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            },
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "type": "block",
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:            "vg_name": "ceph_vg0"
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:        }
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]:    ]
Dec  7 14:52:41 np0005549633 amazing_banzai[92765]: }
Dec  7 14:52:41 np0005549633 systemd[1]: libpod-811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d.scope: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.882615913 +0000 UTC m=+0.618711291 container died 811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_banzai, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:41 np0005549633 systemd[1]: var-lib-containers-storage-overlay-840d3e4ae30faae72e05e84867ca1b69648f1298f539203aa92123d3f72adfa9-merged.mount: Deactivated successfully.
Dec  7 14:52:41 np0005549633 podman[92745]: 2025-12-07 19:52:41.934419187 +0000 UTC m=+0.670514565 container remove 811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 14:52:41 np0005549633 systemd[1]: libpod-conmon-811cded603fa2fd948c17864f5f74c24592b3e62a01f02dbca4a528286d96a9d.scope: Deactivated successfully.
Dec  7 14:52:42 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v27: 105 pgs: 105 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 692 B/s wr, 3 op/s
Dec  7 14:52:42 np0005549633 python3[93009]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.650234681 +0000 UTC m=+0.068714776 container create a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.619470509 +0000 UTC m=+0.037950634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:42 np0005549633 systemd[1]: Started libpod-conmon-a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd.scope.
Dec  7 14:52:42 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.783452791 +0000 UTC m=+0.201932856 container init a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.79168136 +0000 UTC m=+0.210161455 container start a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  7 14:52:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.796925211 +0000 UTC m=+0.215405286 container attach a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 14:52:42 np0005549633 recursing_poitras[93136]: 167 167
Dec  7 14:52:42 np0005549633 systemd[1]: libpod-a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd.scope: Deactivated successfully.
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.800930467 +0000 UTC m=+0.219410592 container died a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 14:52:42 np0005549633 systemd[1]: var-lib-containers-storage-overlay-d753d356a026d84db742d3f3480cc9416dbd83dd3d25e2e74fa38c8d5e8e5ea3-merged.mount: Deactivated successfully.
Dec  7 14:52:42 np0005549633 podman[93081]: 2025-12-07 19:52:42.876105306 +0000 UTC m=+0.294585401 container remove a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 14:52:42 np0005549633 systemd[1]: libpod-conmon-a47b835f771164178821454009c115a832cf947c9fcd36a0b57a185f96e3bfbd.scope: Deactivated successfully.
Dec  7 14:52:42 np0005549633 python3[93140]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765137162.0795352-37453-159070716583528/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=406c66df195c393bad7a9f8899f2c153e3e9e2a3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:52:43 np0005549633 podman[93163]: 2025-12-07 19:52:43.098159039 +0000 UTC m=+0.058548586 container create 7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  7 14:52:43 np0005549633 systemd[1]: Started libpod-conmon-7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76.scope.
Dec  7 14:52:43 np0005549633 podman[93163]: 2025-12-07 19:52:43.066385139 +0000 UTC m=+0.026774686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:43 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c854b7fee4436e8165e762c8aac43bfd95cdfb81a8c443b4c4d8af284196fc2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c854b7fee4436e8165e762c8aac43bfd95cdfb81a8c443b4c4d8af284196fc2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c854b7fee4436e8165e762c8aac43bfd95cdfb81a8c443b4c4d8af284196fc2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c854b7fee4436e8165e762c8aac43bfd95cdfb81a8c443b4c4d8af284196fc2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:43 np0005549633 podman[93163]: 2025-12-07 19:52:43.218413462 +0000 UTC m=+0.178803059 container init 7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  7 14:52:43 np0005549633 podman[93163]: 2025-12-07 19:52:43.229654102 +0000 UTC m=+0.190043649 container start 7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  7 14:52:43 np0005549633 podman[93163]: 2025-12-07 19:52:43.236143115 +0000 UTC m=+0.196532632 container attach 7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 14:52:43 np0005549633 python3[93240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:43 np0005549633 podman[93268]: 2025-12-07 19:52:43.820294431 +0000 UTC m=+0.120234322 container create c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64 (image=quay.io/ceph/ceph:v19, name=practical_tharp, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:52:43 np0005549633 systemd[1]: Started libpod-conmon-c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64.scope.
Dec  7 14:52:43 np0005549633 podman[93268]: 2025-12-07 19:52:43.799255279 +0000 UTC m=+0.099195190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:43 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f831099e6d17bacb78390b02c269854f46445e8956940a73d5b9402e59d70308/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:43 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f831099e6d17bacb78390b02c269854f46445e8956940a73d5b9402e59d70308/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:43 np0005549633 podman[93268]: 2025-12-07 19:52:43.918757682 +0000 UTC m=+0.218697663 container init c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64 (image=quay.io/ceph/ceph:v19, name=practical_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:43 np0005549633 podman[93268]: 2025-12-07 19:52:43.927328551 +0000 UTC m=+0.227268452 container start c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64 (image=quay.io/ceph/ceph:v19, name=practical_tharp, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:43 np0005549633 podman[93268]: 2025-12-07 19:52:43.931377419 +0000 UTC m=+0.231317310 container attach c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64 (image=quay.io/ceph/ceph:v19, name=practical_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  7 14:52:44 np0005549633 lvm[93324]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:52:44 np0005549633 lvm[93324]: VG ceph_vg0 finished
Dec  7 14:52:44 np0005549633 nifty_diffie[93204]: {}
Dec  7 14:52:44 np0005549633 systemd[1]: libpod-7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76.scope: Deactivated successfully.
Dec  7 14:52:44 np0005549633 podman[93163]: 2025-12-07 19:52:44.089436742 +0000 UTC m=+1.049826279 container died 7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:44 np0005549633 systemd[1]: libpod-7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76.scope: Consumed 1.381s CPU time.
Dec  7 14:52:44 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v28: 105 pgs: 105 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 601 B/s wr, 3 op/s
Dec  7 14:52:44 np0005549633 systemd[1]: var-lib-containers-storage-overlay-c854b7fee4436e8165e762c8aac43bfd95cdfb81a8c443b4c4d8af284196fc2a-merged.mount: Deactivated successfully.
Dec  7 14:52:44 np0005549633 podman[93163]: 2025-12-07 19:52:44.153972706 +0000 UTC m=+1.114362253 container remove 7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_diffie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:44 np0005549633 systemd[1]: libpod-conmon-7a2cb035c7714987f3364c4d91d3f5cec11d6b9fbfa24ba7515c1001efe2bd76.scope: Deactivated successfully.
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/600595925' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:44 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 0f381602-97f0-43df-a9f3-b647ece09b5c (Updating rgw.rgw deployment (+3 -> 3))
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.hgnhva", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.hgnhva", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/600595925' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.hgnhva", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  7 14:52:44 np0005549633 systemd[1]: libpod-c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64.scope: Deactivated successfully.
Dec  7 14:52:44 np0005549633 podman[93268]: 2025-12-07 19:52:44.96594487 +0000 UTC m=+1.265884791 container died c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64 (image=quay.io/ceph/ceph:v19, name=practical_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:44 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.hgnhva on compute-2
Dec  7 14:52:44 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.hgnhva on compute-2
Dec  7 14:52:45 np0005549633 systemd[1]: var-lib-containers-storage-overlay-f831099e6d17bacb78390b02c269854f46445e8956940a73d5b9402e59d70308-merged.mount: Deactivated successfully.
Dec  7 14:52:45 np0005549633 podman[93268]: 2025-12-07 19:52:45.019915581 +0000 UTC m=+1.319855482 container remove c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64 (image=quay.io/ceph/ceph:v19, name=practical_tharp, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:45 np0005549633 systemd[1]: libpod-conmon-c421f756d515872cfb15dbfdd2ee80857ddf37ae79113d7e971b8798f9778d64.scope: Deactivated successfully.
Dec  7 14:52:45 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 9aa363b7-3382-4d98-87fd-31746e6f300e (Global Recovery Event) in 15 seconds
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/600595925' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.hgnhva", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/600595925' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.hgnhva", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:45 np0005549633 python3[93397]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:46.018351766 +0000 UTC m=+0.067912455 container create bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d (image=quay.io/ceph/ceph:v19, name=crazy_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec  7 14:52:46 np0005549633 systemd[1]: Started libpod-conmon-bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d.scope.
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:45.984676566 +0000 UTC m=+0.034237295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:46 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v29: 105 pgs: 105 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 511 B/s wr, 2 op/s
Dec  7 14:52:46 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb05f0bb425c491cc57b2d0708ba9390d95539bd5a7aeb7d7ffa9c9411146cb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:46 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb05f0bb425c491cc57b2d0708ba9390d95539bd5a7aeb7d7ffa9c9411146cb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:46.136615036 +0000 UTC m=+0.186175715 container init bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d (image=quay.io/ceph/ceph:v19, name=crazy_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:46.150180247 +0000 UTC m=+0.199740896 container start bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d (image=quay.io/ceph/ceph:v19, name=crazy_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:46.153438125 +0000 UTC m=+0.202998874 container attach bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d (image=quay.io/ceph/ceph:v19, name=crazy_gates, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:46 np0005549633 ceph-mon[74384]: Deploying daemon rgw.rgw.compute-2.hgnhva on compute-2
Dec  7 14:52:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 14:52:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589383663' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 14:52:46 np0005549633 crazy_gates[93415]: 
Dec  7 14:52:46 np0005549633 crazy_gates[93415]: {"fsid":"a8ac706f-8288-541e-8e56-e1124d9b483d","health":{"status":"HEALTH_ERR","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false},"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":103,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":53,"num_osds":3,"num_up_osds":3,"osd_up_since":1765137120,"num_in_osds":3,"osd_in_since":1765137082,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":105}],"num_pgs":105,"num_pools":12,"num_objects":21,"data_bytes":461482,"bytes_used":84672512,"bytes_avail":64327254016,"bytes_total":64411926528,"read_bytes_sec":1791,"write_bytes_sec":511,"read_op_per_sec":1,"write_op_per_sec":0},"fsmap":{"epoch":2,"btime":"2025-12-07T19:52:21:314395+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":4,"modified":"2025-12-07T19:51:27.513002+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.cgejnh":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.orbdku":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"0f381602-97f0-43df-a9f3-b647ece09b5c":{"message":"Updating rgw.rgw deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  7 14:52:46 np0005549633 systemd[1]: libpod-bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d.scope: Deactivated successfully.
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:46.59942399 +0000 UTC m=+0.648984649 container died bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d (image=quay.io/ceph/ceph:v19, name=crazy_gates, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:52:46 np0005549633 systemd[1]: var-lib-containers-storage-overlay-fb05f0bb425c491cc57b2d0708ba9390d95539bd5a7aeb7d7ffa9c9411146cb1-merged.mount: Deactivated successfully.
Dec  7 14:52:46 np0005549633 podman[93399]: 2025-12-07 19:52:46.650913846 +0000 UTC m=+0.700474525 container remove bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d (image=quay.io/ceph/ceph:v19, name=crazy_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:46 np0005549633 systemd[1]: libpod-conmon-bfaa33062de9c9e83f3174f6ccef0baa81a2adeb3739a1e04061de87771f921d.scope: Deactivated successfully.
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.whvyeq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.whvyeq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:47 np0005549633 python3[93477]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.whvyeq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:47 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.whvyeq on compute-1
Dec  7 14:52:47 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.whvyeq on compute-1
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.199187344 +0000 UTC m=+0.077446331 container create b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12 (image=quay.io/ceph/ceph:v19, name=reverent_gauss, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:52:47 np0005549633 systemd[1]: Started libpod-conmon-b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12.scope.
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.168777772 +0000 UTC m=+0.047036789 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:47 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68ec3d569b0f92f9bc3d9cae7bf718ee0d6260820a80dcb40d00046ce316594/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:47 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f68ec3d569b0f92f9bc3d9cae7bf718ee0d6260820a80dcb40d00046ce316594/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.whvyeq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.whvyeq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.316706834 +0000 UTC m=+0.194965901 container init b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12 (image=quay.io/ceph/ceph:v19, name=reverent_gauss, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.32967863 +0000 UTC m=+0.207937617 container start b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12 (image=quay.io/ceph/ceph:v19, name=reverent_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.334375455 +0000 UTC m=+0.212634452 container attach b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12 (image=quay.io/ceph/ceph:v19, name=reverent_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461344533' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  7 14:52:47 np0005549633 reverent_gauss[93494]: 
Dec  7 14:52:47 np0005549633 reverent_gauss[93494]: {"epoch":3,"fsid":"a8ac706f-8288-541e-8e56-e1124d9b483d","modified":"2025-12-07T19:50:56.175798Z","created":"2025-12-07T19:48:33.416686Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec  7 14:52:47 np0005549633 reverent_gauss[93494]: dumped monmap epoch 3
Dec  7 14:52:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:47 np0005549633 systemd[1]: libpod-b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12.scope: Deactivated successfully.
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.817484242 +0000 UTC m=+0.695743229 container died b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12 (image=quay.io/ceph/ceph:v19, name=reverent_gauss, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  7 14:52:47 np0005549633 systemd[1]: var-lib-containers-storage-overlay-f68ec3d569b0f92f9bc3d9cae7bf718ee0d6260820a80dcb40d00046ce316594-merged.mount: Deactivated successfully.
Dec  7 14:52:47 np0005549633 podman[93478]: 2025-12-07 19:52:47.865712741 +0000 UTC m=+0.743971718 container remove b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12 (image=quay.io/ceph/ceph:v19, name=reverent_gauss, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:52:47 np0005549633 systemd[1]: libpod-conmon-b28a4a8591a99df73a8e88c4da47d16d7154099945f9b9fec4a3fc41d5e15b12.scope: Deactivated successfully.
Dec  7 14:52:48 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v30: 105 pgs: 105 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 417 B/s wr, 3 op/s
Dec  7 14:52:48 np0005549633 ceph-mon[74384]: Deploying daemon rgw.rgw.compute-1.whvyeq on compute-1
Dec  7 14:52:48 np0005549633 python3[93556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:48 np0005549633 podman[93557]: 2025-12-07 19:52:48.734640366 +0000 UTC m=+0.079554776 container create 373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6 (image=quay.io/ceph/ceph:v19, name=jolly_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:48 np0005549633 systemd[1]: Started libpod-conmon-373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6.scope.
Dec  7 14:52:48 np0005549633 podman[93557]: 2025-12-07 19:52:48.703632118 +0000 UTC m=+0.048546588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:48 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b6f0e1aac4a016b46cab03248866878822467bbd45d30ca635222aba7f7921/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b6f0e1aac4a016b46cab03248866878822467bbd45d30ca635222aba7f7921/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:48 np0005549633 podman[93557]: 2025-12-07 19:52:48.840590757 +0000 UTC m=+0.185505177 container init 373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6 (image=quay.io/ceph/ceph:v19, name=jolly_tu, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  7 14:52:48 np0005549633 podman[93557]: 2025-12-07 19:52:48.852843875 +0000 UTC m=+0.197758285 container start 373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6 (image=quay.io/ceph/ceph:v19, name=jolly_tu, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  7 14:52:48 np0005549633 podman[93557]: 2025-12-07 19:52:48.857334154 +0000 UTC m=+0.202248614 container attach 373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6 (image=quay.io/ceph/ceph:v19, name=jolly_tu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3857978798' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  7 14:52:49 np0005549633 jolly_tu[93572]: [client.openstack]
Dec  7 14:52:49 np0005549633 jolly_tu[93572]: #011key = AQDk2TVpAAAAABAAK5WGpmx83ckprrCA92n1jw==
Dec  7 14:52:49 np0005549633 jolly_tu[93572]: #011caps mgr = "allow *"
Dec  7 14:52:49 np0005549633 jolly_tu[93572]: #011caps mon = "profile rbd"
Dec  7 14:52:49 np0005549633 jolly_tu[93572]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  7 14:52:49 np0005549633 systemd[1]: libpod-373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6.scope: Deactivated successfully.
Dec  7 14:52:49 np0005549633 podman[93597]: 2025-12-07 19:52:49.367523695 +0000 UTC m=+0.049244517 container died 373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6 (image=quay.io/ceph/ceph:v19, name=jolly_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 14:52:49 np0005549633 systemd[1]: var-lib-containers-storage-overlay-25b6f0e1aac4a016b46cab03248866878822467bbd45d30ca635222aba7f7921-merged.mount: Deactivated successfully.
Dec  7 14:52:49 np0005549633 podman[93597]: 2025-12-07 19:52:49.436674473 +0000 UTC m=+0.118395245 container remove 373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6 (image=quay.io/ceph/ceph:v19, name=jolly_tu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  7 14:52:49 np0005549633 systemd[1]: libpod-conmon-373bf07d56fca5b03d92421c92138fa4fbf703cd6fb0c51c03729113ced9d2a6.scope: Deactivated successfully.
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: from='client.? 192.168.122.100:0/3857978798' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jccdik", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jccdik", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jccdik", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:49 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.jccdik on compute-0
Dec  7 14:52:49 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.jccdik on compute-0
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v31: 105 pgs: 105 active+clean; 451 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 385 B/s wr, 3 op/s
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 12 completed events
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:52:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.437932043 +0000 UTC m=+0.055475104 container create 274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  7 14:52:50 np0005549633 systemd[1]: Started libpod-conmon-274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668.scope.
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.411011084 +0000 UTC m=+0.028554145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:50 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.524020082 +0000 UTC m=+0.141563123 container init 274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.531091381 +0000 UTC m=+0.148634402 container start 274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  7 14:52:50 np0005549633 dazzling_bohr[93731]: 167 167
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.536172327 +0000 UTC m=+0.153715378 container attach 274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bohr, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  7 14:52:50 np0005549633 systemd[1]: libpod-274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668.scope: Deactivated successfully.
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.537665307 +0000 UTC m=+0.155208368 container died 274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bohr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:50 np0005549633 systemd[1]: var-lib-containers-storage-overlay-627c5b2741c876cc83ea177a06d52c8de526198af32d45900366f9d66807c64d-merged.mount: Deactivated successfully.
Dec  7 14:52:50 np0005549633 podman[93701]: 2025-12-07 19:52:50.585589687 +0000 UTC m=+0.203132708 container remove 274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  7 14:52:50 np0005549633 systemd[1]: libpod-conmon-274c6b49bfabd4732dfa0113f4f04da707911b428c801eb3dba9c6c0f16f4668.scope: Deactivated successfully.
Dec  7 14:52:50 np0005549633 systemd[1]: Reloading.
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jccdik", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jccdik", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: Deploying daemon rgw.rgw.compute-0.jccdik on compute-0
Dec  7 14:52:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:50 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:52:50 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:52:50 np0005549633 systemd[1]: Reloading.
Dec  7 14:52:51 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:52:51 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:52:51 np0005549633 systemd[1]: Starting Ceph rgw.rgw.compute-0.jccdik for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:52:51 np0005549633 ansible-async_wrapper.py[93960]: Invoked with j978447436293 30 /home/zuul/.ansible/tmp/ansible-tmp-1765137170.50476-37525-191091984110531/AnsiballZ_command.py _
Dec  7 14:52:51 np0005549633 ansible-async_wrapper.py[93987]: Starting module and watcher
Dec  7 14:52:51 np0005549633 ansible-async_wrapper.py[93987]: Start watching 93988 (30)
Dec  7 14:52:51 np0005549633 ansible-async_wrapper.py[93988]: Start module (93988)
Dec  7 14:52:51 np0005549633 ansible-async_wrapper.py[93960]: Return async_wrapper task started.
Dec  7 14:52:51 np0005549633 python3[93989]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:51 np0005549633 podman[94014]: 2025-12-07 19:52:51.599798024 +0000 UTC m=+0.062510251 container create 02cad21fd326769bcebd14c8359ef3dc8af1e8f29bbdf1814b87507e13705708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-rgw-rgw-compute-0-jccdik, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  7 14:52:51 np0005549633 podman[94026]: 2025-12-07 19:52:51.656524309 +0000 UTC m=+0.061185406 container create c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd (image=quay.io/ceph/ceph:v19, name=beautiful_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  7 14:52:51 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6d51f9e786316e8a87cef5d62b11ac167816f691b26b7507f0b6fc7689e58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:51 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6d51f9e786316e8a87cef5d62b11ac167816f691b26b7507f0b6fc7689e58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:51 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6d51f9e786316e8a87cef5d62b11ac167816f691b26b7507f0b6fc7689e58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:51 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce6d51f9e786316e8a87cef5d62b11ac167816f691b26b7507f0b6fc7689e58/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.jccdik supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:51 np0005549633 podman[94014]: 2025-12-07 19:52:51.579207214 +0000 UTC m=+0.041919451 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:51 np0005549633 podman[94014]: 2025-12-07 19:52:51.697658839 +0000 UTC m=+0.160371096 container init 02cad21fd326769bcebd14c8359ef3dc8af1e8f29bbdf1814b87507e13705708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-rgw-rgw-compute-0-jccdik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  7 14:52:51 np0005549633 podman[94014]: 2025-12-07 19:52:51.706855854 +0000 UTC m=+0.169568081 container start 02cad21fd326769bcebd14c8359ef3dc8af1e8f29bbdf1814b87507e13705708 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-rgw-rgw-compute-0-jccdik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Dec  7 14:52:51 np0005549633 bash[94014]: 02cad21fd326769bcebd14c8359ef3dc8af1e8f29bbdf1814b87507e13705708
Dec  7 14:52:51 np0005549633 systemd[1]: Started libpod-conmon-c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd.scope.
Dec  7 14:52:51 np0005549633 systemd[1]: Started Ceph rgw.rgw.compute-0.jccdik for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:52:51 np0005549633 podman[94026]: 2025-12-07 19:52:51.637606684 +0000 UTC m=+0.042267831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:51 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:51 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7cf25297efb4b2652c32a93a0c0fa49fd2ba466a03082633ebd37a3a135d20/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:51 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7cf25297efb4b2652c32a93a0c0fa49fd2ba466a03082633ebd37a3a135d20/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:51 np0005549633 podman[94026]: 2025-12-07 19:52:51.78866983 +0000 UTC m=+0.193330977 container init c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd (image=quay.io/ceph/ceph:v19, name=beautiful_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:52:51 np0005549633 radosgw[94049]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:52:51 np0005549633 radosgw[94049]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec  7 14:52:51 np0005549633 radosgw[94049]: framework: beast
Dec  7 14:52:51 np0005549633 radosgw[94049]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  7 14:52:51 np0005549633 radosgw[94049]: init_numa not setting numa affinity
Dec  7 14:52:51 np0005549633 podman[94026]: 2025-12-07 19:52:51.804064102 +0000 UTC m=+0.208725219 container start c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd (image=quay.io/ceph/ceph:v19, name=beautiful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:51 np0005549633 podman[94026]: 2025-12-07 19:52:51.815760304 +0000 UTC m=+0.220421471 container attach c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd (image=quay.io/ceph/ceph:v19, name=beautiful_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 0f381602-97f0-43df-a9f3-b647ece09b5c (Updating rgw.rgw deployment (+3 -> 3))
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 0f381602-97f0-43df-a9f3-b647ece09b5c (Updating rgw.rgw deployment (+3 -> 3)) in 7 seconds
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 05b960e6-1043-479c-8d84-26463fa128b7 (Updating mds.cephfs deployment (+3 -> 3))
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoxbwj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoxbwj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoxbwj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.yoxbwj on compute-2
Dec  7 14:52:51 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.yoxbwj on compute-2
Dec  7 14:52:52 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v32: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 3.0 KiB/s wr, 178 op/s
Dec  7 14:52:52 np0005549633 radosgw[94049]: v1 topic migration: starting v1 topic migration..
Dec  7 14:52:52 np0005549633 radosgw[94049]: LDAP not started since no server URIs were provided in the configuration.
Dec  7 14:52:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-rgw-rgw-compute-0-jccdik[94043]: 2025-12-07T19:52:52.104+0000 7fb0c98bd980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  7 14:52:52 np0005549633 radosgw[94049]: v1 topic migration: finished v1 topic migration
Dec  7 14:52:52 np0005549633 radosgw[94049]: framework: beast
Dec  7 14:52:52 np0005549633 radosgw[94049]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  7 14:52:52 np0005549633 radosgw[94049]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  7 14:52:52 np0005549633 radosgw[94049]: starting handler: beast
Dec  7 14:52:52 np0005549633 radosgw[94049]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:52:52 np0005549633 radosgw[94049]: mgrc service_daemon_register rgw.14568 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.jccdik,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=8254bbd1-99ec-43a0-bc4e-be5da89d809b,zone_name=default,zonegroup_id=0dc3b6b1-84b6-469d-bd7f-01fe6725d4ff,zonegroup_name=default}
Dec  7 14:52:52 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 14:52:52 np0005549633 beautiful_bhaskara[94050]: 
Dec  7 14:52:52 np0005549633 beautiful_bhaskara[94050]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 14:52:52 np0005549633 systemd[1]: libpod-c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd.scope: Deactivated successfully.
Dec  7 14:52:52 np0005549633 podman[94026]: 2025-12-07 19:52:52.244416636 +0000 UTC m=+0.649077713 container died c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd (image=quay.io/ceph/ceph:v19, name=beautiful_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:52:52 np0005549633 systemd[1]: var-lib-containers-storage-overlay-3c7cf25297efb4b2652c32a93a0c0fa49fd2ba466a03082633ebd37a3a135d20-merged.mount: Deactivated successfully.
Dec  7 14:52:52 np0005549633 podman[94026]: 2025-12-07 19:52:52.286726056 +0000 UTC m=+0.691387133 container remove c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd (image=quay.io/ceph/ceph:v19, name=beautiful_bhaskara, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 14:52:52 np0005549633 ansible-async_wrapper.py[93988]: Module complete (93988)
Dec  7 14:52:52 np0005549633 systemd[1]: libpod-conmon-c5b1a326fadc9385f3c18585b531982d3efc603694fbdb5c99c6977adba04abd.scope: Deactivated successfully.
Dec  7 14:52:52 np0005549633 python3[94757]: ansible-ansible.legacy.async_status Invoked with jid=j978447436293.93960 mode=status _async_dir=/root/.ansible_async
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoxbwj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 14:52:52 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.yoxbwj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 14:52:53 np0005549633 python3[94806]: ansible-ansible.legacy.async_status Invoked with jid=j978447436293.93960 mode=cleanup _async_dir=/root/.ansible_async
Dec  7 14:52:53 np0005549633 python3[94832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:53 np0005549633 ceph-mon[74384]: Deploying daemon mds.cephfs.compute-2.yoxbwj on compute-2
Dec  7 14:52:53 np0005549633 podman[94833]: 2025-12-07 19:52:53.922072797 +0000 UTC m=+0.128934806 container create b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:52:53 np0005549633 systemd[1]: Started libpod-conmon-b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f.scope.
Dec  7 14:52:53 np0005549633 podman[94833]: 2025-12-07 19:52:53.899265418 +0000 UTC m=+0.106127457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:54 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9ca2c540fa66a4746c042e78d63c545b450fb95f81623f52a3b80f57e7355e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c9ca2c540fa66a4746c042e78d63c545b450fb95f81623f52a3b80f57e7355e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:54 np0005549633 podman[94833]: 2025-12-07 19:52:54.056737815 +0000 UTC m=+0.263599904 container init b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:52:54 np0005549633 podman[94833]: 2025-12-07 19:52:54.073984956 +0000 UTC m=+0.280846995 container start b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 14:52:54 np0005549633 podman[94833]: 2025-12-07 19:52:54.080026347 +0000 UTC m=+0.286888376 container attach b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:54 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v33: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 2.7 KiB/s wr, 176 op/s
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anvhxr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anvhxr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anvhxr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:54 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.anvhxr on compute-0
Dec  7 14:52:54 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.anvhxr on compute-0
Dec  7 14:52:54 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14580 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 14:52:54 np0005549633 wonderful_poitras[94848]: 
Dec  7 14:52:54 np0005549633 wonderful_poitras[94848]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  7 14:52:54 np0005549633 systemd[1]: libpod-b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f.scope: Deactivated successfully.
Dec  7 14:52:54 np0005549633 podman[94833]: 2025-12-07 19:52:54.533942194 +0000 UTC m=+0.740804203 container died b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  7 14:52:54 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4c9ca2c540fa66a4746c042e78d63c545b450fb95f81623f52a3b80f57e7355e-merged.mount: Deactivated successfully.
Dec  7 14:52:54 np0005549633 podman[94833]: 2025-12-07 19:52:54.580770246 +0000 UTC m=+0.787632265 container remove b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f (image=quay.io/ceph/ceph:v19, name=wonderful_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:54 np0005549633 systemd[1]: libpod-conmon-b8319561c9a18f56957c98671f2c7fad2bc7795840edfb002cac319b2a09060f.scope: Deactivated successfully.
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e3 new map
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-07T19:52:54:858665+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:21.314295+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.yoxbwj{-1:24193} state up:standby seq 1 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:52:54 np0005549633 podman[94976]: 2025-12-07 19:52:54.872780907 +0000 UTC m=+0.061369411 container create a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jones, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] up:boot
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] as mds.0
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.yoxbwj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.yoxbwj"} v 0)
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.yoxbwj"}]: dispatch
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e3 all = 0
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e4 new map
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-07T19:52:54:877848+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:54.877834+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:creating seq 1 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:creating}
Dec  7 14:52:54 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.yoxbwj is now active in filesystem cephfs as rank 0
Dec  7 14:52:54 np0005549633 systemd[1]: Started libpod-conmon-a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67.scope.
Dec  7 14:52:54 np0005549633 podman[94976]: 2025-12-07 19:52:54.843023022 +0000 UTC m=+0.031611576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:54 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:54 np0005549633 podman[94976]: 2025-12-07 19:52:54.984353918 +0000 UTC m=+0.172942432 container init a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jones, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:54 np0005549633 podman[94976]: 2025-12-07 19:52:54.995039254 +0000 UTC m=+0.183627768 container start a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:54 np0005549633 podman[94976]: 2025-12-07 19:52:54.998990969 +0000 UTC m=+0.187579523 container attach a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jones, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 14:52:55 np0005549633 jovial_jones[94993]: 167 167
Dec  7 14:52:55 np0005549633 systemd[1]: libpod-a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67.scope: Deactivated successfully.
Dec  7 14:52:55 np0005549633 podman[94976]: 2025-12-07 19:52:55.001687911 +0000 UTC m=+0.190276435 container died a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  7 14:52:55 np0005549633 systemd[1]: var-lib-containers-storage-overlay-b0c619dc26bde76d4ef637690dd26b3e8e4a9d63dc9c7a98bd39e31602a045a0-merged.mount: Deactivated successfully.
Dec  7 14:52:55 np0005549633 podman[94976]: 2025-12-07 19:52:55.054999645 +0000 UTC m=+0.243588149 container remove a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  7 14:52:55 np0005549633 systemd[1]: libpod-conmon-a16fa775206fb593513925a168879a7b6ec33ace61476b87c3e20a2b84085a67.scope: Deactivated successfully.
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anvhxr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.anvhxr", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: Deploying daemon mds.cephfs.compute-0.anvhxr on compute-0
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: daemon mds.cephfs.compute-2.yoxbwj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: daemon mds.cephfs.compute-2.yoxbwj is now active in filesystem cephfs as rank 0
Dec  7 14:52:55 np0005549633 systemd[1]: Reloading.
Dec  7 14:52:55 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 13 completed events
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:55 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:52:55 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:52:55 np0005549633 systemd[1]: Reloading.
Dec  7 14:52:55 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:52:55 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:52:55 np0005549633 python3[95072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:55 np0005549633 podman[95111]: 2025-12-07 19:52:55.696062342 +0000 UTC m=+0.055034541 container create d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826 (image=quay.io/ceph/ceph:v19, name=upbeat_poitras, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:55 np0005549633 podman[95111]: 2025-12-07 19:52:55.674504897 +0000 UTC m=+0.033477106 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:55 np0005549633 systemd[1]: Started libpod-conmon-d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826.scope.
Dec  7 14:52:55 np0005549633 systemd[1]: Starting Ceph mds.cephfs.compute-0.anvhxr for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:52:55 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:55 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8605ea883d7c0d7c5fead2f96a5fbe8105c6805d224d4601c3df6869955be5b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:55 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8605ea883d7c0d7c5fead2f96a5fbe8105c6805d224d4601c3df6869955be5b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:55 np0005549633 podman[95111]: 2025-12-07 19:52:55.855955634 +0000 UTC m=+0.214927883 container init d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826 (image=quay.io/ceph/ceph:v19, name=upbeat_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:55 np0005549633 podman[95111]: 2025-12-07 19:52:55.864887553 +0000 UTC m=+0.223859752 container start d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826 (image=quay.io/ceph/ceph:v19, name=upbeat_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:55 np0005549633 podman[95111]: 2025-12-07 19:52:55.872808745 +0000 UTC m=+0.231781004 container attach d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826 (image=quay.io/ceph/ceph:v19, name=upbeat_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e5 new map
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-12-07T19:52:55:888881+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:55.888878+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] up:active
Dec  7 14:52:55 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active}
Dec  7 14:52:56 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v34: 105 pgs: 105 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 2.7 KiB/s wr, 176 op/s
Dec  7 14:52:56 np0005549633 podman[95198]: 2025-12-07 19:52:56.158437916 +0000 UTC m=+0.070371902 container create 76396446a44300e60bee81eb08e90a7603cfa00b65689d6929ad960527c919b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mds-cephfs-compute-0-anvhxr, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  7 14:52:56 np0005549633 podman[95198]: 2025-12-07 19:52:56.127425457 +0000 UTC m=+0.039359473 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:52:56 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48309332583c799758a52a76377ccc222e1ef145d81cc36ef6dae80cce738a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:56 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48309332583c799758a52a76377ccc222e1ef145d81cc36ef6dae80cce738a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:56 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48309332583c799758a52a76377ccc222e1ef145d81cc36ef6dae80cce738a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:56 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab48309332583c799758a52a76377ccc222e1ef145d81cc36ef6dae80cce738a/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.anvhxr supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:56 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14586 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 14:52:56 np0005549633 upbeat_poitras[95129]: 
Dec  7 14:52:56 np0005549633 upbeat_poitras[95129]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": 
"osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  7 14:52:56 np0005549633 podman[95198]: 2025-12-07 19:52:56.254662356 +0000 UTC m=+0.166596382 container init 76396446a44300e60bee81eb08e90a7603cfa00b65689d6929ad960527c919b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mds-cephfs-compute-0-anvhxr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  7 14:52:56 np0005549633 podman[95198]: 2025-12-07 19:52:56.26602873 +0000 UTC m=+0.177962756 container start 76396446a44300e60bee81eb08e90a7603cfa00b65689d6929ad960527c919b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mds-cephfs-compute-0-anvhxr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:52:56 np0005549633 bash[95198]: 76396446a44300e60bee81eb08e90a7603cfa00b65689d6929ad960527c919b4
Dec  7 14:52:56 np0005549633 systemd[1]: libpod-d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826.scope: Deactivated successfully.
Dec  7 14:52:56 np0005549633 podman[95111]: 2025-12-07 19:52:56.280225179 +0000 UTC m=+0.639197348 container died d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826 (image=quay.io/ceph/ceph:v19, name=upbeat_poitras, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:56 np0005549633 systemd[1]: Started Ceph mds.cephfs.compute-0.anvhxr for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:52:56 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8605ea883d7c0d7c5fead2f96a5fbe8105c6805d224d4601c3df6869955be5b8-merged.mount: Deactivated successfully.
Dec  7 14:52:56 np0005549633 podman[95111]: 2025-12-07 19:52:56.337301904 +0000 UTC m=+0.696274063 container remove d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826 (image=quay.io/ceph/ceph:v19, name=upbeat_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  7 14:52:56 np0005549633 ceph-mds[95220]: set uid:gid to 167:167 (ceph:ceph)
Dec  7 14:52:56 np0005549633 ceph-mds[95220]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec  7 14:52:56 np0005549633 ceph-mds[95220]: main not setting numa affinity
Dec  7 14:52:56 np0005549633 ceph-mds[95220]: pidfile_write: ignore empty --pid-file
Dec  7 14:52:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mds-cephfs-compute-0-anvhxr[95214]: starting mds.cephfs.compute-0.anvhxr at 
Dec  7 14:52:56 np0005549633 systemd[1]: libpod-conmon-d36b7c8a4454c3b24cc6241f88db57d223f9f06aed1cfd84e38018b809259826.scope: Deactivated successfully.
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:52:56 np0005549633 ceph-mds[95220]: mds.cephfs.compute-0.anvhxr Updating MDS map to version 5 from mon.0
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bbacsi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bbacsi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bbacsi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:56 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:56 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.bbacsi on compute-1
Dec  7 14:52:56 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.bbacsi on compute-1
Dec  7 14:52:56 np0005549633 ansible-async_wrapper.py[93987]: Done in kid B.
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e6 new map
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2025-12-07T19:52:57:239432+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:55.888878+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anvhxr{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:52:57 np0005549633 ceph-mds[95220]: mds.cephfs.compute-0.anvhxr Updating MDS map to version 6 from mon.0
Dec  7 14:52:57 np0005549633 ceph-mds[95220]: mds.cephfs.compute-0.anvhxr Monitors have assigned me to become a standby
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] up:boot
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active} 1 up:standby
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.anvhxr"} v 0)
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.anvhxr"}]: dispatch
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e6 all = 0
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e7 new map
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-12-07T19:52:57:256217+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:55.888878+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anvhxr{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active} 1 up:standby
Dec  7 14:52:57 np0005549633 python3[95277]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bbacsi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.bbacsi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: Deploying daemon mds.cephfs.compute-1.bbacsi on compute-1
Dec  7 14:52:57 np0005549633 podman[95279]: 2025-12-07 19:52:57.415168005 +0000 UTC m=+0.088323979 container create 6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff (image=quay.io/ceph/ceph:v19, name=amazing_lumiere, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:52:57 np0005549633 systemd[1]: Started libpod-conmon-6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff.scope.
Dec  7 14:52:57 np0005549633 podman[95279]: 2025-12-07 19:52:57.383177161 +0000 UTC m=+0.056333175 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:57 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12702a412367057cb53a1f882290607294d759da3cdf7258d27f43522897a13/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a12702a412367057cb53a1f882290607294d759da3cdf7258d27f43522897a13/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:57 np0005549633 podman[95279]: 2025-12-07 19:52:57.52022121 +0000 UTC m=+0.193377244 container init 6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff (image=quay.io/ceph/ceph:v19, name=amazing_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:57 np0005549633 podman[95279]: 2025-12-07 19:52:57.531606454 +0000 UTC m=+0.204762398 container start 6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff (image=quay.io/ceph/ceph:v19, name=amazing_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:52:57 np0005549633 podman[95279]: 2025-12-07 19:52:57.535882498 +0000 UTC m=+0.209038532 container attach 6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff (image=quay.io/ceph/ceph:v19, name=amazing_lumiere, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:57 np0005549633 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  7 14:52:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:52:57 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='client.14598 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  7 14:52:57 np0005549633 amazing_lumiere[95295]: 
Dec  7 14:52:57 np0005549633 amazing_lumiere[95295]: [{"container_id": "ca360f912e5e", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.11%", "created": "2025-12-07T19:49:22.196723Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-07T19:52:22.338992Z", "memory_usage": 7803502, "ports": [], "service_name": "crash", "started": "2025-12-07T19:49:22.053166Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@crash.compute-0", "version": "19.2.3"}, {"container_id": "ff95b7b20574", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.39%", "created": "2025-12-07T19:50:00.767592Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-07T19:52:22.177025Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2025-12-07T19:50:00.690034Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@crash.compute-1", "version": "19.2.3"}, {"container_id": "dbbb00f0221f", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.34%", "created": "2025-12-07T19:51:18.585673Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-07T19:52:22.287036Z", "memory_usage": 7825522, "ports": [], "service_name": "crash", "started": "2025-12-07T19:51:18.229122Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.anvhxr", "daemon_name": "mds.cephfs.compute-0.anvhxr", "daemon_type": "mds", "events": ["2025-12-07T19:52:56.388949Z daemon:mds.cephfs.compute-0.anvhxr [INFO] \"Deployed mds.cephfs.compute-0.anvhxr on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.yoxbwj", "daemon_name": "mds.cephfs.compute-2.yoxbwj", "daemon_type": "mds", "events": ["2025-12-07T19:52:54.097747Z daemon:mds.cephfs.compute-2.yoxbwj [INFO] \"Deployed mds.cephfs.compute-2.yoxbwj on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "a557fd32ab2d", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "25.14%", "created": 
"2025-12-07T19:48:39.286566Z", "daemon_id": "compute-0.dyzcyj", "daemon_name": "mgr.compute-0.dyzcyj", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-07T19:52:22.338839Z", "memory_usage": 544210944, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-07T19:48:39.178231Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@mgr.compute-0.dyzcyj", "version": "19.2.3"}, {"container_id": "b265effb5499", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "32.68%", "created": "2025-12-07T19:51:08.166229Z", "daemon_id": "compute-1.cgejnh", "daemon_name": "mgr.compute-1.cgejnh", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-07T19:52:22.177322Z", "memory_usage": 504574771, "ports": [8765], "service_name": "mgr", "started": "2025-12-07T19:51:08.031953Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@mgr.compute-1.cgejnh", "version": "19.2.3"}, {"container_id": "b9a06358a7d9", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "34.25%", "created": "2025-12-07T19:51:06.200298Z", "daemon_id": "compute-2.orbdku", "daemon_name": "mgr.compute-2.orbdku", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-07T19:52:22.286960Z", "memory_usage": 505308774, "ports": 
[8765], "service_name": "mgr", "started": "2025-12-07T19:51:06.060599Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@mgr.compute-2.orbdku", "version": "19.2.3"}, {"container_id": "a36e06099c02", "container_image_digests": ["quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee", "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.79%", "created": "2025-12-07T19:48:35.376463Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-07T19:52:22.338636Z", "memory_request": 2147483648, "memory_usage": 60072919, "ports": [], "service_name": "mon", "started": "2025-12-07T19:48:37.441602Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@mon.compute-0", "version": "19.2.3"}, {"container_id": "72fcd421a1ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.55%", "created": "2025-12-07T19:50:52.032812Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-07T19:52:22.177205Z", "memory_request": 2147483648, "memory_usage": 49251614, "ports": [], "service_name": "mon", "started": "2025-12-07T19:50:51.907883Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-a8ac706f-8288-541e-8e56-e1124d9b483d@mon.compute-1", "version": "19.2.3"}, 
{"container_id": "487d67f3677b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "qu
Dec  7 14:52:57 np0005549633 systemd[1]: libpod-6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff.scope: Deactivated successfully.
Dec  7 14:52:58 np0005549633 podman[95321]: 2025-12-07 19:52:58.049188695 +0000 UTC m=+0.053804999 container died 6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff (image=quay.io/ceph/ceph:v19, name=amazing_lumiere, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  7 14:52:58 np0005549633 rsyslogd[1005]: message too long (13716) with configured size 8096, begin of message is: [{"container_id": "ca360f912e5e", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  7 14:52:58 np0005549633 systemd[1]: var-lib-containers-storage-overlay-a12702a412367057cb53a1f882290607294d759da3cdf7258d27f43522897a13-merged.mount: Deactivated successfully.
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v35: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 3.8 KiB/s wr, 261 op/s
Dec  7 14:52:58 np0005549633 podman[95321]: 2025-12-07 19:52:58.103803523 +0000 UTC m=+0.108419747 container remove 6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff (image=quay.io/ceph/ceph:v19, name=amazing_lumiere, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:58 np0005549633 systemd[1]: libpod-conmon-6654e933e7ff3663b55ee1d23e4ee7f66e4bb4a1f66445d79fbdf979b57753ff.scope: Deactivated successfully.
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 05b960e6-1043-479c-8d84-26463fa128b7 (Updating mds.cephfs deployment (+3 -> 3))
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 05b960e6-1043-479c-8d84-26463fa128b7 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 8acd6423-44fb-4d70-a5cf-70ebfcb281d9 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.amjvcc
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.amjvcc
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e8 new map
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-12-07T19:52:58:375282+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:55.888878+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 2 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anvhxr{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bbacsi{-1:24194} state up:standby seq 1 addr [v2:192.168.122.101:6804/3776962482,v1:192.168.122.101:6805/3776962482] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3776962482,v1:192.168.122.101:6805/3776962482] up:boot
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active} 2 up:standby
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.bbacsi"} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.bbacsi"}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e8 all = 0
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.amjvcc-rgw
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.amjvcc-rgw
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.amjvcc's ganesha conf is defaulting to empty
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.amjvcc's ganesha conf is defaulting to empty
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:52:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.amjvcc on compute-1
Dec  7 14:52:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.amjvcc on compute-1
Dec  7 14:52:59 np0005549633 python3[95397]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.158021652 +0000 UTC m=+0.073980686 container create d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae (image=quay.io/ceph/ceph:v19, name=agitated_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  7 14:52:59 np0005549633 systemd[1]: Started libpod-conmon-d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae.scope.
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.128825193 +0000 UTC m=+0.044784287 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:52:59 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:52:59 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ef51e2e4367c8c65277cc59976ca39e9ec7fd88e54341eaf8419287f5bb359/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:59 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5ef51e2e4367c8c65277cc59976ca39e9ec7fd88e54341eaf8419287f5bb359/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: Creating key for client.nfs.cephfs.0.0.compute-1.amjvcc
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: Rados config object exists: conf-nfs.cephfs
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: Creating key for client.nfs.cephfs.0.0.compute-1.amjvcc-rgw
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.amjvcc-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: Bind address in nfs.cephfs.0.0.compute-1.amjvcc's ganesha conf is defaulting to empty
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: Deploying daemon nfs.cephfs.0.0.compute-1.amjvcc on compute-1
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.275663514 +0000 UTC m=+0.191622608 container init d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae (image=quay.io/ceph/ceph:v19, name=agitated_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.28377462 +0000 UTC m=+0.199733664 container start d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae (image=quay.io/ceph/ceph:v19, name=agitated_johnson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.288817424 +0000 UTC m=+0.204776508 container attach d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae (image=quay.io/ceph/ceph:v19, name=agitated_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e9 new map
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-12-07T19:52:59:636544+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:58.921065+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anvhxr{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bbacsi{-1:24194} state up:standby seq 1 addr [v2:192.168.122.101:6804/3776962482,v1:192.168.122.101:6805/3776962482] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] up:active
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active} 2 up:standby
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  7 14:52:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2682051024' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  7 14:52:59 np0005549633 agitated_johnson[95413]: 
Dec  7 14:52:59 np0005549633 agitated_johnson[95413]: {"fsid":"a8ac706f-8288-541e-8e56-e1124d9b483d","health":{"status":"HEALTH_WARN","checks":{"BLUESTORE_SLOW_OP_ALERT":{"severity":"HEALTH_WARN","summary":{"message":"1 OSD(s) experiencing slow operations in BlueStore","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":116,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":53,"num_osds":3,"num_up_osds":3,"osd_up_since":1765137120,"num_in_osds":3,"osd_in_since":1765137082,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":105}],"num_pgs":105,"num_pools":12,"num_objects":219,"data_bytes":467432,"bytes_used":89088000,"bytes_avail":64322838528,"bytes_total":64411926528,"read_bytes_sec":149025,"write_bytes_sec":3923,"read_op_per_sec":165,"write_op_per_sec":96},"fsmap":{"epoch":9,"btime":"2025-12-07T19:52:59:636544+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.yoxbwj","status":"up:active","gid":24193}],"up:standby":2},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":7,"modified":"2025-12-07T19:52:54.098719+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.dyzcyj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.cgejnh":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.orbdku":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14568":{"start_epoch":7,"start_stamp":"2025-12-07T19:52:52.213946+0000","gid":14568,"addr":"192.168.122.100:0/1667733722","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.jccdik","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"8254bbd1-99ec-43a0-bc4e-be5da89d809b","zone_name":"default","zonegroup_id":"0dc3b6b1-84b6-469d-bd7f-01fe6725d4ff","zonegroup_name":"default"},"task_status":{}},"24181":{"start_epoch":5,"start_stamp":"2025-12-07T19:52:47.465428+0000","gid":24181,"addr":"192.168.122.102:0/1425368907","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.hgnhva","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"8254bbd1-99ec-43a0-bc4e-be5da89d809b","zone_name":"default","zonegroup_id":"0dc3b6b1-84b6-469d-bd7f-01fe6725d4ff","zonegroup_name":"default"},"task_status":{}},"24182":{"start_epoch":6,"start_stamp":"2025-12-07T19:52:49.473790+0000","gid":24182,"addr":"192.168.122.101:0/363289442","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.whvyeq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"8254bbd1-99ec-43a0-bc4e-be5da89d809b","zone_name":"default","zonegroup_id":"0dc3b6b1-84b6-469d-bd7f-01fe6725d4ff","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"05b960e6-1043-479c-8d84-26463fa128b7":{"message":"Updating mds.cephfs deployment (+3 -> 3) (4s)\n      [==================..........] (remaining: 2s)","progress":0.66666668653488159,"add_to_ceph_s":true}}}
Dec  7 14:52:59 np0005549633 systemd[1]: libpod-d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae.scope: Deactivated successfully.
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.75001306 +0000 UTC m=+0.665972104 container died d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae (image=quay.io/ceph/ceph:v19, name=agitated_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:52:59 np0005549633 systemd[1]: var-lib-containers-storage-overlay-b5ef51e2e4367c8c65277cc59976ca39e9ec7fd88e54341eaf8419287f5bb359-merged.mount: Deactivated successfully.
Dec  7 14:52:59 np0005549633 podman[95398]: 2025-12-07 19:52:59.799001188 +0000 UTC m=+0.714960202 container remove d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae (image=quay.io/ceph/ceph:v19, name=agitated_johnson, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:52:59 np0005549633 systemd[1]: libpod-conmon-d86ea1f4b75aee6283f1e3e4d02b72f922e2df184dbdbbff530ee09e7a75baae.scope: Deactivated successfully.
Dec  7 14:53:00 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v36: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 144 KiB/s rd, 3.8 KiB/s wr, 260 op/s
Dec  7 14:53:00 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 14 completed events
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.celnmz
Dec  7 14:53:00 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.celnmz
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 14:53:00 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  7 14:53:00 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:53:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:53:00 np0005549633 python3[95473]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:53:00 np0005549633 podman[95475]: 2025-12-07 19:53:00.867844748 +0000 UTC m=+0.060349702 container create 87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769 (image=quay.io/ceph/ceph:v19, name=bold_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:53:00 np0005549633 systemd[1]: Started libpod-conmon-87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769.scope.
Dec  7 14:53:00 np0005549633 podman[95475]: 2025-12-07 19:53:00.848307936 +0000 UTC m=+0.040812920 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:53:00 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:00 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a9eeae5000c80e164c9d288df6d51886de8612b35a986b559b5b9922ce2cf3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:00 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99a9eeae5000c80e164c9d288df6d51886de8612b35a986b559b5b9922ce2cf3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:00 np0005549633 podman[95475]: 2025-12-07 19:53:00.984635036 +0000 UTC m=+0.177140070 container init 87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769 (image=quay.io/ceph/ceph:v19, name=bold_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  7 14:53:00 np0005549633 podman[95475]: 2025-12-07 19:53:00.995041784 +0000 UTC m=+0.187546768 container start 87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769 (image=quay.io/ceph/ceph:v19, name=bold_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:53:00 np0005549633 podman[95475]: 2025-12-07 19:53:00.999641087 +0000 UTC m=+0.192146071 container attach 87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769 (image=quay.io/ceph/ceph:v19, name=bold_saha, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e10 new map
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2025-12-07T19:53:01:249343+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:58.921065+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anvhxr{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bbacsi{-1:24194} state up:standby seq 1 addr [v2:192.168.122.101:6804/3776962482,v1:192.168.122.101:6805/3776962482] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:53:01 np0005549633 ceph-mds[95220]: mds.cephfs.compute-0.anvhxr Updating MDS map to version 10 from mon.0
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] up:standby
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active} 2 up:standby
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936392514' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  7 14:53:01 np0005549633 bold_saha[95506]: 
Dec  7 14:53:01 np0005549633 systemd[1]: libpod-87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769.scope: Deactivated successfully.
Dec  7 14:53:01 np0005549633 bold_saha[95506]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.dyzcyj/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.cgejnh/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.orbdku/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502921113","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.jccdik","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.whvyeq","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.hgnhva","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  7 14:53:01 np0005549633 podman[95475]: 2025-12-07 19:53:01.382047257 +0000 UTC m=+0.574552241 container died 87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769 (image=quay.io/ceph/ceph:v19, name=bold_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  7 14:53:01 np0005549633 systemd[1]: var-lib-containers-storage-overlay-99a9eeae5000c80e164c9d288df6d51886de8612b35a986b559b5b9922ce2cf3-merged.mount: Deactivated successfully.
Dec  7 14:53:01 np0005549633 podman[95475]: 2025-12-07 19:53:01.439481471 +0000 UTC m=+0.631986455 container remove 87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769 (image=quay.io/ceph/ceph:v19, name=bold_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 14:53:01 np0005549633 systemd[1]: libpod-conmon-87a72c5f677fee3f213eaf3634001d93e65a4c4b6bf36dea6762d58e0efb4769.scope: Deactivated successfully.
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: Creating key for client.nfs.cephfs.1.0.compute-2.celnmz
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 14:53:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 14:53:02 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v37: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 4.6 KiB/s wr, 262 op/s
Dec  7 14:53:02 np0005549633 python3[95568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:53:02 np0005549633 podman[95569]: 2025-12-07 19:53:02.609934174 +0000 UTC m=+0.075065684 container create e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e (image=quay.io/ceph/ceph:v19, name=great_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  7 14:53:02 np0005549633 systemd[1]: Started libpod-conmon-e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e.scope.
Dec  7 14:53:02 np0005549633 podman[95569]: 2025-12-07 19:53:02.580032456 +0000 UTC m=+0.045164006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:53:02 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93d1ef3d26bf84c0509580ca159592334c9c0923e8563a397acfc69402cab4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a93d1ef3d26bf84c0509580ca159592334c9c0923e8563a397acfc69402cab4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e11 new map
Dec  7 14:53:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).mds e11 print_map#012e11#012btime 2025-12-07T19:53:02:677932+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-07T19:52:21.314295+0000#012modified#0112025-12-07T19:52:58.921065+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24193}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24193 members: 24193#012[mds.cephfs.compute-2.yoxbwj{0:24193} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/104560910,v1:192.168.122.102:6805/104560910] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.anvhxr{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2188517156,v1:192.168.122.100:6807/2188517156] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.bbacsi{-1:24194} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3776962482,v1:192.168.122.101:6805/3776962482] compat {c=[1],r=[1],i=[1fff]}]
Dec  7 14:53:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/3776962482,v1:192.168.122.101:6805/3776962482] up:standby
Dec  7 14:53:02 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.yoxbwj=up:active} 2 up:standby
Dec  7 14:53:02 np0005549633 podman[95569]: 2025-12-07 19:53:02.71837378 +0000 UTC m=+0.183505290 container init e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e (image=quay.io/ceph/ceph:v19, name=great_napier, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec  7 14:53:02 np0005549633 podman[95569]: 2025-12-07 19:53:02.725990653 +0000 UTC m=+0.191122133 container start e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e (image=quay.io/ceph/ceph:v19, name=great_napier, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  7 14:53:02 np0005549633 podman[95569]: 2025-12-07 19:53:02.729723753 +0000 UTC m=+0.194855233 container attach e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e (image=quay.io/ceph/ceph:v19, name=great_napier, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 14:53:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487537846' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  7 14:53:03 np0005549633 great_napier[95584]: mimic
Dec  7 14:53:03 np0005549633 systemd[1]: libpod-e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e.scope: Deactivated successfully.
Dec  7 14:53:03 np0005549633 podman[95569]: 2025-12-07 19:53:03.108131227 +0000 UTC m=+0.573262697 container died e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e (image=quay.io/ceph/ceph:v19, name=great_napier, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  7 14:53:03 np0005549633 systemd[1]: var-lib-containers-storage-overlay-a93d1ef3d26bf84c0509580ca159592334c9c0923e8563a397acfc69402cab4e-merged.mount: Deactivated successfully.
Dec  7 14:53:03 np0005549633 podman[95569]: 2025-12-07 19:53:03.162236992 +0000 UTC m=+0.627368502 container remove e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e (image=quay.io/ceph/ceph:v19, name=great_napier, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:53:03 np0005549633 systemd[1]: libpod-conmon-e3da8820ff6c52b5f7f7f9aec289946a98dc44e7e0857b55de65a7d0ca2d054e.scope: Deactivated successfully.
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.celnmz-rgw
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.celnmz-rgw
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.celnmz's ganesha conf is defaulting to empty
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.celnmz's ganesha conf is defaulting to empty
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:53:03 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.celnmz on compute-2
Dec  7 14:53:03 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.celnmz on compute-2
Dec  7 14:53:04 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v38: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 87 op/s
Dec  7 14:53:04 np0005549633 python3[95665]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:53:04 np0005549633 podman[95666]: 2025-12-07 19:53:04.416937305 +0000 UTC m=+0.027483325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:53:04 np0005549633 podman[95666]: 2025-12-07 19:53:04.584754166 +0000 UTC m=+0.195300216 container create 484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06 (image=quay.io/ceph/ceph:v19, name=recursing_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Dec  7 14:53:04 np0005549633 systemd[1]: Started libpod-conmon-484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06.scope.
Dec  7 14:53:04 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:04 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7278d4967c362cf5981686388b2cb1198781224f92b7bdb85f5d8d316f0969c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:04 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7278d4967c362cf5981686388b2cb1198781224f92b7bdb85f5d8d316f0969c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:04 np0005549633 ceph-mon[74384]: Rados config object exists: conf-nfs.cephfs
Dec  7 14:53:04 np0005549633 ceph-mon[74384]: Creating key for client.nfs.cephfs.1.0.compute-2.celnmz-rgw
Dec  7 14:53:04 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:53:04 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.celnmz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:53:04 np0005549633 ceph-mon[74384]: Bind address in nfs.cephfs.1.0.compute-2.celnmz's ganesha conf is defaulting to empty
Dec  7 14:53:04 np0005549633 ceph-mon[74384]: Deploying daemon nfs.cephfs.1.0.compute-2.celnmz on compute-2
Dec  7 14:53:04 np0005549633 podman[95666]: 2025-12-07 19:53:04.719097173 +0000 UTC m=+0.329643293 container init 484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06 (image=quay.io/ceph/ceph:v19, name=recursing_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  7 14:53:04 np0005549633 podman[95666]: 2025-12-07 19:53:04.727690673 +0000 UTC m=+0.338236733 container start 484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06 (image=quay.io/ceph/ceph:v19, name=recursing_proskuriakova, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:53:04 np0005549633 podman[95666]: 2025-12-07 19:53:04.733110758 +0000 UTC m=+0.343656778 container attach 484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06 (image=quay.io/ceph/ceph:v19, name=recursing_proskuriakova, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  7 14:53:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec  7 14:53:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3525849405' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  7 14:53:05 np0005549633 recursing_proskuriakova[95681]: 
Dec  7 14:53:05 np0005549633 recursing_proskuriakova[95681]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Dec  7 14:53:05 np0005549633 systemd[1]: libpod-484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06.scope: Deactivated successfully.
Dec  7 14:53:05 np0005549633 podman[95666]: 2025-12-07 19:53:05.226980464 +0000 UTC m=+0.837526504 container died 484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06 (image=quay.io/ceph/ceph:v19, name=recursing_proskuriakova, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:53:05 np0005549633 systemd[1]: var-lib-containers-storage-overlay-c7278d4967c362cf5981686388b2cb1198781224f92b7bdb85f5d8d316f0969c-merged.mount: Deactivated successfully.
Dec  7 14:53:05 np0005549633 podman[95666]: 2025-12-07 19:53:05.283885694 +0000 UTC m=+0.894431734 container remove 484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06 (image=quay.io/ceph/ceph:v19, name=recursing_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:53:05 np0005549633 systemd[1]: libpod-conmon-484edd44652fb3203cbc7c2d21a50155d6049cb1b72f5c3f87f5333dab23de06.scope: Deactivated successfully.
Dec  7 14:53:06 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v39: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 87 op/s
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.tkfndb
Dec  7 14:53:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.tkfndb
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 14:53:06 np0005549633 ceph-mgr[74680]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  7 14:53:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:53:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: Creating key for client.nfs.cephfs.2.0.compute-0.tkfndb
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  7 14:53:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:08 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v40: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.2 KiB/s wr, 89 op/s
Dec  7 14:53:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  7 14:53:09 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 14:53:10 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v41: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Dec  7 14:53:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.tkfndb-rgw
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.tkfndb-rgw
Dec  7 14:53:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  7 14:53:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:53:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.tkfndb's ganesha conf is defaulting to empty
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.tkfndb's ganesha conf is defaulting to empty
Dec  7 14:53:11 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  7 14:53:11 np0005549633 ceph-mon[74384]: log_channel(audit) log [DBG] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.tkfndb on compute-0
Dec  7 14:53:11 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.tkfndb on compute-0
Dec  7 14:53:12 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v42: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.333949323 +0000 UTC m=+0.073030811 container create 7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_burnell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Dec  7 14:53:12 np0005549633 systemd[1]: Started libpod-conmon-7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6.scope.
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.302159774 +0000 UTC m=+0.041241302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: Rados config object exists: conf-nfs.cephfs
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: Creating key for client.nfs.cephfs.2.0.compute-0.tkfndb-rgw
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.tkfndb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: Bind address in nfs.cephfs.2.0.compute-0.tkfndb's ganesha conf is defaulting to empty
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: Deploying daemon nfs.cephfs.2.0.compute-0.tkfndb on compute-0
Dec  7 14:53:12 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.4484463 +0000 UTC m=+0.187527758 container init 7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_burnell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.460974335 +0000 UTC m=+0.200055783 container start 7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_burnell, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.465634299 +0000 UTC m=+0.204715747 container attach 7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  7 14:53:12 np0005549633 vibrant_burnell[95861]: 167 167
Dec  7 14:53:12 np0005549633 systemd[1]: libpod-7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6.scope: Deactivated successfully.
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.469524123 +0000 UTC m=+0.208605571 container died 7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:53:12 np0005549633 systemd[1]: var-lib-containers-storage-overlay-02a37c79a51c36beeb2bd6677f50fe1a5a2d338a95715a37d368bdbb252ed233-merged.mount: Deactivated successfully.
Dec  7 14:53:12 np0005549633 podman[95844]: 2025-12-07 19:53:12.525167699 +0000 UTC m=+0.264249187 container remove 7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  7 14:53:12 np0005549633 systemd[1]: libpod-conmon-7902ac71e1ace930752ee101b87f7e2c8f30fda21f1eb4ebef479f2dc8189cf6.scope: Deactivated successfully.
Dec  7 14:53:12 np0005549633 systemd[1]: Reloading.
Dec  7 14:53:12 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:53:12 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:53:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:12 np0005549633 systemd[1]: Reloading.
Dec  7 14:53:13 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:53:13 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:53:13 np0005549633 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.tkfndb for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:53:13 np0005549633 podman[96004]: 2025-12-07 19:53:13.630695708 +0000 UTC m=+0.078876097 container create f6972ffed0e83c3b514ab9a6b86cb292784ac599aabfb4955f3cd539c79ff04d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:53:13 np0005549633 podman[96004]: 2025-12-07 19:53:13.596214688 +0000 UTC m=+0.044395137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  7 14:53:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f3f1e7ea0ff5fced36eb8cce7fccb0b0b04537ca79d57db1b10ba83d7826ac/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f3f1e7ea0ff5fced36eb8cce7fccb0b0b04537ca79d57db1b10ba83d7826ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f3f1e7ea0ff5fced36eb8cce7fccb0b0b04537ca79d57db1b10ba83d7826ac/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:13 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f3f1e7ea0ff5fced36eb8cce7fccb0b0b04537ca79d57db1b10ba83d7826ac/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.tkfndb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:13 np0005549633 podman[96004]: 2025-12-07 19:53:13.741440196 +0000 UTC m=+0.189620595 container init f6972ffed0e83c3b514ab9a6b86cb292784ac599aabfb4955f3cd539c79ff04d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:53:13 np0005549633 podman[96004]: 2025-12-07 19:53:13.750753494 +0000 UTC m=+0.198933863 container start f6972ffed0e83c3b514ab9a6b86cb292784ac599aabfb4955f3cd539c79ff04d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  7 14:53:13 np0005549633 bash[96004]: f6972ffed0e83c3b514ab9a6b86cb292784ac599aabfb4955f3cd539c79ff04d
Dec  7 14:53:13 np0005549633 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.tkfndb for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  7 14:53:13 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=0
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec  7 14:53:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 14:53:14 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v43: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 852 B/s wr, 2 op/s
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:14 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 8acd6423-44fb-4d70-a5cf-70ebfcb281d9 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  7 14:53:14 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 8acd6423-44fb-4d70-a5cf-70ebfcb281d9 (Updating nfs.cephfs deployment (+3 -> 3)) in 16 seconds
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:14 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev ec1017a4-8f88-4277-86e0-85a2f5cf5925 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec  7 14:53:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:14 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.uuuzrv on compute-1
Dec  7 14:53:14 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.uuuzrv on compute-1
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: Deploying daemon haproxy.nfs.cephfs.compute-1.uuuzrv on compute-1
Dec  7 14:53:15 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 15 completed events
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:53:15 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:16 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v44: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 853 B/s wr, 2 op/s
Dec  7 14:53:16 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:18 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v45: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  7 14:53:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:53:19 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:19 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:53:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:53:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.cpclff on compute-0
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.cpclff on compute-0
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:53:20
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] do_upmap
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'images', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'backups', 'volumes', 'default.rgw.meta', '.mgr']
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] prepared 0/10 upmap changes
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v46: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec  7 14:53:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:53:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:53:20 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:20 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbd0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: Deploying daemon haproxy.nfs.cephfs.compute-0.cpclff on compute-0
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  7 14:53:21 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 4fb66d9d-5c7e-4d79-a357-5a3f1342cf5f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec  7 14:53:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  7 14:53:22 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 653233e8-0971-4d34-a8e8-d4822d79fa77 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:22 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v49: 105 pgs: 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:22 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  7 14:53:23 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev b869d246-5d57-4976-930a-c390d013be17 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Dec  7 14:53:23 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:24 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v51: 151 pgs: 46 unknown, 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  7 14:53:24 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:24 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  7 14:53:24 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 2b940c24-eada-49bb-8fc8-c527ece3b40e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.050108257 +0000 UTC m=+4.279281735 container create d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f (image=quay.io/ceph/haproxy:2.3, name=thirsty_shockley)
Dec  7 14:53:25 np0005549633 systemd[1]: Started libpod-conmon-d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f.scope.
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.028930621 +0000 UTC m=+4.258104129 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 14:53:25 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.160796062 +0000 UTC m=+4.389969600 container init d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f (image=quay.io/ceph/haproxy:2.3, name=thirsty_shockley)
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.172012052 +0000 UTC m=+4.401185550 container start d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f (image=quay.io/ceph/haproxy:2.3, name=thirsty_shockley)
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.177228261 +0000 UTC m=+4.406401759 container attach d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f (image=quay.io/ceph/haproxy:2.3, name=thirsty_shockley)
Dec  7 14:53:25 np0005549633 thirsty_shockley[96281]: 0 0
Dec  7 14:53:25 np0005549633 systemd[1]: libpod-d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f.scope: Deactivated successfully.
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.182392889 +0000 UTC m=+4.411566397 container died d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f (image=quay.io/ceph/haproxy:2.3, name=thirsty_shockley)
Dec  7 14:53:25 np0005549633 systemd[1]: var-lib-containers-storage-overlay-31a191febd210818f91dd9a02971c811e5649f63fad9be338f3a975123be1d66-merged.mount: Deactivated successfully.
Dec  7 14:53:25 np0005549633 podman[96168]: 2025-12-07 19:53:25.243385337 +0000 UTC m=+4.472558845 container remove d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f (image=quay.io/ceph/haproxy:2.3, name=thirsty_shockley)
Dec  7 14:53:25 np0005549633 systemd[1]: libpod-conmon-d8f94e4801be2d36f7f8a2eafed2b3ae8f122e6d3af00d05e85d72dbb72f961f.scope: Deactivated successfully.
Dec  7 14:53:25 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,77 pgs not in active + clean state
Dec  7 14:53:25 np0005549633 systemd[1]: Reloading.
Dec  7 14:53:25 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:53:25 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  7 14:53:25 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev a4073c3c-366e-434e-99cc-85e6c742614a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 57 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=57 pruub=12.199860573s) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active pruub 201.139938354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 57 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=57 pruub=12.199860573s) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown pruub 201.139938354s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:25 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.17( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.15( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.a( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.6( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.11( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.13( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.14( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.10( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.b( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.9( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.f( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.8( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.4( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1a( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.e( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.2( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.d( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.7( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.c( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.3( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.12( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.5( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.18( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.19( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.16( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1b( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1c( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1d( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1e( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 58 pg[7.1f( empty local-lis/les=24/25 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:26 np0005549633 systemd[1]: Reloading.
Dec  7 14:53:26 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v54: 182 pgs: 77 unknown, 105 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:26 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:53:26 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:53:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:26 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:26 np0005549633 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.cpclff for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:53:26 np0005549633 podman[96426]: 2025-12-07 19:53:26.786352817 +0000 UTC m=+0.062045267 container create b8e4b8d0b734345d34b340f6a7237c7040cd2f88995599741bdbda00e6860991 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff)
Dec  7 14:53:26 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00892d62f96422c6ff54faa8b28e72d49e2d002f63d32bb0a02107872f3c0232/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:26 np0005549633 podman[96426]: 2025-12-07 19:53:26.756823909 +0000 UTC m=+0.032516399 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  7 14:53:26 np0005549633 podman[96426]: 2025-12-07 19:53:26.864220947 +0000 UTC m=+0.139913467 container init b8e4b8d0b734345d34b340f6a7237c7040cd2f88995599741bdbda00e6860991 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  7 14:53:26 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev dfa8207d-7d38-4e21-8764-d74b42b10107 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  7 14:53:26 np0005549633 podman[96426]: 2025-12-07 19:53:26.87669638 +0000 UTC m=+0.152388830 container start b8e4b8d0b734345d34b340f6a7237c7040cd2f88995599741bdbda00e6860991 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:26 np0005549633 bash[96426]: b8e4b8d0b734345d34b340f6a7237c7040cd2f88995599741bdbda00e6860991
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [NOTICE] 340/195326 (2) : New worker #1 (4) forked
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1e( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195326 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 14:53:26 np0005549633 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.cpclff for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1d( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.16( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1a( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.c( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.4( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.12( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.3( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.15( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1f( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.a( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.10( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.11( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.13( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.17( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.b( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.14( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1c( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.9( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.6( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.7( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.8( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.5( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.2( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.0( empty local-lis/les=57/59 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.e( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.d( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.18( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.19( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 59 pg[7.1b( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=24/24 les/c/f=25/25/0 sis=57) [1] r=0 lpr=57 pi=[24,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  7 14:53:26 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:27 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.vjhjhu on compute-2
Dec  7 14:53:27 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.vjhjhu on compute-2
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  7 14:53:27 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 15d0c22f-309c-460a-9b8a-8562ba674737 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:27 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:27 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:27 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec  7 14:53:28 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v57: 244 pgs: 62 unknown, 32 peering, 150 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  7 14:53:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:28 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:28 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:28 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:28 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  7 14:53:28 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  7 14:53:28 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 7a09d346-1797-4d3f-83b6-25ad54bfadca (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 4fb66d9d-5c7e-4d79-a357-5a3f1342cf5f (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 4fb66d9d-5c7e-4d79-a357-5a3f1342cf5f (PG autoscaler increasing pool 5 PGs from 1 to 32) in 8 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 653233e8-0971-4d34-a8e8-d4822d79fa77 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 653233e8-0971-4d34-a8e8-d4822d79fa77 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 7 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev b869d246-5d57-4976-930a-c390d013be17 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event b869d246-5d57-4976-930a-c390d013be17 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 6 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 2b940c24-eada-49bb-8fc8-c527ece3b40e (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 2b940c24-eada-49bb-8fc8-c527ece3b40e (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev a4073c3c-366e-434e-99cc-85e6c742614a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event a4073c3c-366e-434e-99cc-85e6c742614a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 4 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev dfa8207d-7d38-4e21-8764-d74b42b10107 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event dfa8207d-7d38-4e21-8764-d74b42b10107 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 15d0c22f-309c-460a-9b8a-8562ba674737 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 15d0c22f-309c-460a-9b8a-8562ba674737 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 7a09d346-1797-4d3f-83b6-25ad54bfadca (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  7 14:53:29 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 7a09d346-1797-4d3f-83b6-25ad54bfadca (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: Deploying daemon haproxy.nfs.cephfs.compute-2.vjhjhu on compute-2
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:29 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 61 pg[10.0( v 53'1163 (0'0,53'1163] local-lis/les=47/48 n=178 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.698176384s) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 53'1162 mlcod 53'1162 active pruub 201.244171143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 61 pg[10.0( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=61 pruub=8.698176384s) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 53'1162 mlcod 0'0 unknown pruub 201.244171143s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107bc848 space 0x563b10806d10 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a3b08 space 0x563b10806350 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a32e8 space 0x563b107e7940 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a3608 space 0x563b10806280 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b0028 space 0x563b10806420 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b0fbc8488 space 0x563b0fc69870 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107db888 space 0x563b107e65c0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b0ac8 space 0x563b10807390 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b0de8 space 0x563b10727d50 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107dbe28 space 0x563b107e7bb0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a27a8 space 0x563b108065c0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107bc488 space 0x563b10806010 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b12e8 space 0x563b10806830 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b0528 space 0x563b108064f0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b103b5b08 space 0x563b107e61b0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a3d88 space 0x563b107e7c80 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107bd928 space 0x563b107e7a10 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107caa28 space 0x563b10807530 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b17e8 space 0x563b103b2de0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b105887a8 space 0x563b10578d10 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b0fc6cf28 space 0x563b108060e0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a2488 space 0x563b107276d0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a2ac8 space 0x563b10727940 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b104c6708 space 0x563b107e7600 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b102afd88 space 0x563b103b8b70 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b0708 space 0x563b10806760 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107b1108 space 0x563b10806690 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107bd428 space 0x563b107e7ae0 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107ca028 space 0x563b10807120 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b0fbc82a8 space 0x563b107e7870 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x563b10f298c0) operator()   moving buffer(0x563b107a5388 space 0x563b10807050 0x0~1000 clean)
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Dec  7 14:53:29 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Dec  7 14:53:30 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v59: 306 pgs: 124 unknown, 32 peering, 150 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 481 B/s rd, 0 op/s
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:30 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:30 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:30 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 23 completed events
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:53:30 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:30 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.13( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.10( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.11( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.17( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1b( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.18( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.7( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.9( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.e( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.12( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1f( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1e( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1d( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1a( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.19( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1c( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[12.0( v 53'3 (0'0,53'3] local-lis/les=51/52 n=3 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=11.935274124s) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 53'2 mlcod 53'2 active pruub 205.379272461s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.5( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.4( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.b( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.8( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.6( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.a( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.c( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.d( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.f( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.2( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.14( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.3( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.15( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.16( v 53'1163 lc 0'0 (0'0,53'1163] local-lis/les=47/48 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.10( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[12.0( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=11.935274124s) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 53'2 mlcod 0'0 unknown pruub 205.379272461s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.5( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.4( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.0( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 53'1162 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 62 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=47/47 les/c/f=48/48/0 sis=61) [1] r=0 lpr=61 pi=[47,61)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  7 14:53:30 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  7 14:53:31 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  7 14:53:31 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  7 14:53:31 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  7 14:53:32 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:53:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:32 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:32 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:32 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  7 14:53:32 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  7 14:53:33 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  7 14:53:33 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.15( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.16( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.17( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1d( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1e( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1( v 53'3 (0'0,53'3] local-lis/les=51/52 n=1 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.7( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.f( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.8( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.11( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.14( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.19( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.18( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1b( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1a( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1c( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1f( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.3( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=1 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.2( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=1 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.d( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.e( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.c( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.a( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.b( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.9( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.6( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.4( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.5( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.12( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.13( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.10( v 53'3 lc 0'0 (0'0,53'3] local-lis/les=51/52 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.15( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.16( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1d( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.17( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1e( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.f( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.7( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.8( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.14( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.19( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.18( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1b( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1a( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1c( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.11( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.0( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 53'2 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.1f( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.3( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.2( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.d( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.c( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.b( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.a( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.9( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.6( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.5( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.12( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.10( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.13( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.e( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 63 pg[12.4( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=53'3 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Dec  7 14:53:33 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Dec  7 14:53:34 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:53:34 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:34 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:34 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:34 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0025c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:34 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec  7 14:53:34 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec  7 14:53:35 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  7 14:53:35 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  7 14:53:36 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 1 peering, 31 unknown, 305 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:53:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:36 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:36 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:36 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 14:53:36 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  7 14:53:36 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  7 14:53:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:53:37 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  7 14:53:37 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  7 14:53:38 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 0 op/s
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:38 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:38 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:38 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:38 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:38 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec  7 14:53:38 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec  7 14:53:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  7 14:53:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:39 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 14:53:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:39 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 14:53:39 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  7 14:53:39 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  7 14:53:40 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 735 B/s rd, 0 op/s
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:40 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:40 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:40 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event be926d64-3678-412c-a86e-e14bc8cf0dfa (Global Recovery Event) in 15 seconds
Dec  7 14:53:40 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  7 14:53:40 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  7 14:53:41 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  7 14:53:41 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  7 14:53:42 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  7 14:53:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:42 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:42 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0032d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:42 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 14:53:42 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  7 14:53:42 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:43 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.12( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.19( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.1d( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.1b( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.4( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.7( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.5( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.8( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.14( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.18( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.573817253s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973937988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.18( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.573785782s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973937988s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1b( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.573261261s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973937988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1b( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.573221207s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973937988s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.13( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.766285896s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.167037964s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.13( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.766267776s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.167037964s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.051283836s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451965332s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.10( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.050894737s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451965332s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.10( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.765595436s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.167037964s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.10( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.765546799s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.167037964s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.12( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.765345573s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166961670s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.12( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.765321732s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166961670s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.e( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.572091103s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973876953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.e( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.572056770s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973876953s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.050041199s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451980591s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.4( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.765312195s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.167297363s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.f( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.571717262s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973709106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.4( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.765289307s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.167297363s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.049974442s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451980591s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.f( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.571685791s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973709106s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.6( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764746666s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166931152s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.6( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764728546s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166931152s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.2( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.571409225s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973648071s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.2( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.571389198s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973648071s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.049537659s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451812744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.049513817s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451812744s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.9( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764489174s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166870117s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.9( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764473915s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166870117s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.b( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764229774s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166641235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.b( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764186859s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166641235s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.a( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764332771s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166870117s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.a( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.764275551s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166870117s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.048982620s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451828003s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.048964500s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451828003s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.c( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.763746262s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166625977s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.c( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.763722420s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166625977s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.5( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.570631981s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973632812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.5( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.570590973s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973632812s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.e( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.763556480s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166641235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.6( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.570219994s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973403931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.048375130s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451614380s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.6( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.570188522s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973403931s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.e( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.763521194s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166641235s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.048333168s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451614380s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.2( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.763250351s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166549683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.2( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.763233185s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166549683s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.9( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.569931984s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973358154s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.9( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.569904327s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973358154s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.8( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.569661140s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973571777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.5( v 63'1166 (0'0,63'1166] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.047673225s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=62'1164 lcod 62'1165 mlcod 62'1165 active pruub 217.451629639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.8( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.569644928s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973571777s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.3( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.762553215s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166534424s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.5( v 63'1166 (0'0,63'1166] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.047636986s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=62'1164 lcod 62'1165 mlcod 0'0 unknown NOTIFY pruub 217.451629639s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.3( v 53'3 (0'0,53'3] local-lis/les=62/63 n=1 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.762525558s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166534424s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.b( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.568533897s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973129272s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.b( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.568514824s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973129272s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.046906471s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451553345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.046890259s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451553345s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.18( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1c( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.761211395s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166275024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.14( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.568111420s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.973205566s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1c( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.761191368s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166275024s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.14( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.568086624s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.973205566s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.10( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.567546844s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.972839355s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1a( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.760841370s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166152954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.11( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.567467690s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.972824097s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.11( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.567448616s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.972824097s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1a( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.760800362s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166152954s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.046215057s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451538086s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[8.17( empty local-lis/les=0/0 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.046022415s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451538086s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.13( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.567064285s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.972839355s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.13( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.567045212s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.972839355s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.18( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.760188103s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166015625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.18( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.760169029s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166015625s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.19( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.760009766s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.165985107s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.045416832s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451492310s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1f( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565956116s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.972076416s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.10( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.566741943s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.972839355s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.045377731s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451492310s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1f( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565937042s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.972076416s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.3( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565837860s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.972076416s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.3( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565811157s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.972076416s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.4( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565386772s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.971694946s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.19( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.759654999s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.165985107s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.4( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565340996s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.971694946s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.045119286s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451522827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.045084000s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451522827s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.044860840s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451370239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.044845581s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451370239s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.8( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.759390831s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.165908813s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.a( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565037727s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.971572876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.a( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.565019608s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.971572876s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.8( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.759349823s) [0] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.165908813s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=64) [1] r=0 lpr=64 pi=[61,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.7( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758841515s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.165893555s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1e( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758435249s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.165527344s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.7( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758793831s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.165893555s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1e( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758419037s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.165527344s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.16( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.564381599s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.971588135s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043982506s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451217651s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.044013977s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451278687s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1d( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758225441s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.165512085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.1d( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758208275s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.165512085s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043999672s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451278687s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.16( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.564337730s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.971588135s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.11( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758819580s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.166275024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043798447s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451217651s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.11( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.758778572s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.166275024s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043748856s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451293945s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043730736s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451293945s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.17( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.757924080s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 active pruub 220.165527344s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1d( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.563890457s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.971511841s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1d( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.563875198s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.971511841s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[12.17( v 53'3 (0'0,53'3] local-lis/les=62/63 n=0 ec=62/51 lis/c=62/62 les/c/f=63/63/0 sis=64 pruub=13.757884026s) [2] r=-1 lpr=64 pi=[62,64)/1 crt=53'3 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 220.165527344s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.039056778s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.446762085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.039044380s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.446762085s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1e( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.554442406s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 active pruub 221.962234497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[7.1e( empty local-lis/les=57/59 n=0 ec=57/24 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=15.554374695s) [0] r=-1 lpr=64 pi=[57,64)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 221.962234497s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043133736s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451324463s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=64 pruub=11.043114662s) [2] r=-1 lpr=64 pi=[61,64)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451324463s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec  7 14:53:43 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.16( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.11( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.10( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.1( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.18( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.f( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.2( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.1f( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.9( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.1c( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.15( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.1b( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 64 pg[5.7( empty local-lis/les=0/0 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v69: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:44 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:44 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.013994217s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451980591s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.013798714s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451980591s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.013227463s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451934814s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.013112068s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451934814s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.012644768s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451812744s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.012622833s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451812744s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.5( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.5( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.7( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.7( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.011841774s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451766968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.011787415s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451766968s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1d( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1d( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.011392593s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451644897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.011356354s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451644897s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.010678291s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451385498s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.010663986s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451385498s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.010231018s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451370239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.010206223s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451370239s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.009872437s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 217.451522827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.1f( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/49 lis/c=61/61 les/c/f=62/62/0 sis=65) [1]/[0] r=-1 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=10.009842873s) [0] r=-1 lpr=65 pi=[61,65)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 217.451522827s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.1( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[6.6( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[6.a( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.11( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.16( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.7( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.10( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.18( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.2( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.15( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.15( v 46'6 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.1c( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.14( v 53'44 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.17( v 53'44 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.1b( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[6.e( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.f( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[5.9( empty local-lis/les=64/65 n=0 ec=56/20 lis/c=56/56 les/c/f=57/57/0 sis=64) [1] r=0 lpr=64 pi=[56,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.e( v 46'6 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.f( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.8( v 53'44 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.6( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=64/65 n=1 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.1b( v 53'44 lc 53'8 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.4( v 53'44 (0'0,53'44] local-lis/les=64/65 n=1 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.18( v 53'44 lc 53'19 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[6.2( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.d( v 46'6 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.10( v 63'47 lc 62'46 (0'0,63'47] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=63'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.a( v 46'6 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.11( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.19( v 53'44 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.12( v 46'6 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[8.12( v 53'44 (0'0,53'44] local-lis/les=64/65 n=0 ec=59/42 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=53'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 65 pg[9.10( v 46'6 lc 0'0 (0'0,46'6] local-lis/les=64/65 n=0 ec=59/45 lis/c=59/59 les/c/f=60/60/0 sis=64) [1] r=0 lpr=64 pi=[59,64)/1 crt=46'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  7 14:53:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  7 14:53:45 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 24 completed events
Dec  7 14:53:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:53:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  7 14:53:45 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec  7 14:53:45 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec  7 14:53:46 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v71: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1 op/s
Dec  7 14:53:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  7 14:53:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 14:53:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  7 14:53:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 14:53:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:46 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:46 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:46 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec  7 14:53:46 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec  7 14:53:47 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec  7 14:53:47 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec  7 14:53:48 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 4 peering, 12 remapped+peering, 8 unknown, 1 active+clean+scrubbing+deep, 312 active+clean; 455 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 195 B/s, 0 objects/s recovering
Dec  7 14:53:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:48 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:48 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:48 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  7 14:53:48 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  7 14:53:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195348 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,101 pgs not in active + clean state
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.5( v 63'1166 (0'0,63'1166] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=62'1164 lcod 62'1165 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.5( v 63'1166 (0'0,63'1166] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=62'1164 lcod 62'1165 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[6.2( v 53'39 (0'0,53'39] local-lis/les=65/66 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=53'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[6.a( v 53'39 (0'0,53'39] local-lis/les=65/66 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=53'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[6.6( v 53'39 lc 0'0 (0'0,53'39] local-lis/les=65/66 n=1 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=53'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 66 pg[6.e( v 53'39 lc 53'19 (0'0,53'39] local-lis/les=65/66 n=1 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=65) [1] r=0 lpr=65 pi=[56,65)/1 crt=53'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec  7 14:53:49 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v74: 337 pgs: 2 active+clean+scrubbing, 57 peering, 12 remapped+peering, 32 unknown, 234 active+clean; 454 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 229 B/s, 1 objects/s recovering
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:53:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:50 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:53:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:50 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc003fe0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:50 np0005549633 python3[96480]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.dnkqzx on compute-1
Dec  7 14:53:50 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.dnkqzx on compute-1
Dec  7 14:53:50 np0005549633 podman[96481]: 2025-12-07 19:53:50.556271267 +0000 UTC m=+0.063351492 container create c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64 (image=quay.io/ceph/ceph:v19, name=hardcore_borg, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:50 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[6.b( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.12( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.12( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1a( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1a( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.f( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.f( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1e( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1e( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1c( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1c( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1d( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1d( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1b( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.7( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.7( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.4( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.4( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1b( v 53'120 (0'0,53'120] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.5( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.5( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[6.7( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[6.3( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.1( v 53'120 (0'0,53'120] local-lis/les=0/0 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[6.f( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.14( v 66'126 (0'0,66'126] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 luod=0'0 crt=63'123 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[11.14( v 66'126 (0'0,66'126] local-lis/les=0/0 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=63'123 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 systemd[1]: Started libpod-conmon-c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64.scope.
Dec  7 14:53:50 np0005549633 podman[96481]: 2025-12-07 19:53:50.53389417 +0000 UTC m=+0.040974385 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be609db1c3704ead1303fe4217a94e340f27bf11336cfada9315e49b2444500d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:50 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be609db1c3704ead1303fe4217a94e340f27bf11336cfada9315e49b2444500d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.5( v 63'1166 (0'0,63'1166] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=63'1166 lcod 62'1165 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [0]/[1] async=[0] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 67 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:50 np0005549633 podman[96481]: 2025-12-07 19:53:50.694870128 +0000 UTC m=+0.201950443 container init c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64 (image=quay.io/ceph/ceph:v19, name=hardcore_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  7 14:53:50 np0005549633 podman[96481]: 2025-12-07 19:53:50.70580405 +0000 UTC m=+0.212884275 container start c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64 (image=quay.io/ceph/ceph:v19, name=hardcore_borg, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  7 14:53:50 np0005549633 podman[96481]: 2025-12-07 19:53:50.711063261 +0000 UTC m=+0.218143496 container attach c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64 (image=quay.io/ceph/ceph:v19, name=hardcore_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:53:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:50 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbb4002f50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: Deploying daemon keepalived.nfs.cephfs.compute-1.dnkqzx on compute-1
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  7 14:53:51 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.821568489s) [0] async=[0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726272583s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.820856094s) [0] async=[0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726242065s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.a( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.820918083s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726272583s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.820796967s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726242065s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.756820679s) [0] async=[0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.663345337s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.12( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.756755829s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.663345337s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.819636345s) [0] async=[0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726486206s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.1a( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.819577217s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726486206s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.819291115s) [0] async=[0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726287842s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=14.819244385s) [0] r=-1 lpr=68 pi=[61,68)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726287842s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[6.7( v 53'39 lc 53'21 (0'0,53'39] local-lis/les=67/68 n=1 ec=56/22 lis/c=64/56 les/c/f=66/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=53'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.f( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[6.3( v 53'39 lc 0'0 (0'0,53'39] local-lis/les=67/68 n=2 ec=56/22 lis/c=64/56 les/c/f=66/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=53'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.14( v 66'126 (0'0,66'126] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=66'126 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.4( v 53'120 (0'0,53'120] local-lis/les=67/68 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.1a( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[6.f( v 53'39 lc 53'1 (0'0,53'39] local-lis/les=67/68 n=3 ec=56/22 lis/c=64/56 les/c/f=66/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=53'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=1,(0+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.1e( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.1( v 53'120 (0'0,53'120] local-lis/les=67/68 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[6.b( v 53'39 lc 0'0 (0'0,53'39] local-lis/les=67/68 n=1 ec=56/22 lis/c=64/56 les/c/f=66/57/0 sis=67) [1] r=0 lpr=67 pi=[56,67)/2 crt=53'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.1d( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.7( v 53'120 (0'0,53'120] local-lis/les=67/68 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.12( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.5( v 53'120 (0'0,53'120] local-lis/les=67/68 n=1 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.1b( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 68 pg[11.1c( v 53'120 (0'0,53'120] local-lis/les=67/68 n=0 ec=61/49 lis/c=65/61 les/c/f=66/62/0 sis=67) [1] r=0 lpr=67 pi=[61,67)/1 crt=53'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:53:51 np0005549633 hardcore_borg[96497]: could not fetch user info: no user info saved
Dec  7 14:53:52 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 1 active+recovering+remapped, 19 active+recovery_wait+remapped, 2 active+clean+scrubbing, 12 active+remapped, 65 peering, 238 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 103/231 objects misplaced (44.589%); 338 B/s, 6 objects/s recovering
Dec  7 14:53:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:52 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:52 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  7 14:53:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  7 14:53:52 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  7 14:53:52 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 69 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=69 pruub=13.775138855s) [0] async=[0] r=-1 lpr=69 pi=[61,69)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726303101s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:52 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 69 pg[10.2( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=69 pruub=13.775061607s) [0] r=-1 lpr=69 pi=[61,69)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726303101s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:52 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  7 14:53:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  7 14:53:53 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  7 14:53:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 70 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=70 pruub=12.765875816s) [2] async=[2] r=-1 lpr=70 pi=[61,70)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726699829s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 70 pg[10.3( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=70 pruub=12.765789032s) [2] r=-1 lpr=70 pi=[61,70)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726699829s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 70 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=70 pruub=12.763731956s) [0] async=[0] r=-1 lpr=70 pi=[61,70)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 229.726211548s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:53:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 70 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=70 pruub=12.763642311s) [0] r=-1 lpr=70 pi=[61,70)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.726211548s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:53:53 np0005549633 systemd[1]: libpod-c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64.scope: Deactivated successfully.
Dec  7 14:53:53 np0005549633 podman[96587]: 2025-12-07 19:53:53.990467267 +0000 UTC m=+0.038315885 container died c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64 (image=quay.io/ceph/ceph:v19, name=hardcore_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:53:54 np0005549633 systemd[1]: var-lib-containers-storage-overlay-be609db1c3704ead1303fe4217a94e340f27bf11336cfada9315e49b2444500d-merged.mount: Deactivated successfully.
Dec  7 14:53:54 np0005549633 podman[96587]: 2025-12-07 19:53:54.06061217 +0000 UTC m=+0.108460738 container remove c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64 (image=quay.io/ceph/ceph:v19, name=hardcore_borg, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  7 14:53:54 np0005549633 systemd[1]: libpod-conmon-c09f32225b29400fd3aa77ccfefae0d8c3c6bfb881f3f334788d5b15b4615a64.scope: Deactivated successfully.
Dec  7 14:53:54 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v80: 337 pgs: 1 active+recovering+remapped, 19 active+recovery_wait+remapped, 2 active+clean+scrubbing, 12 active+remapped, 65 peering, 238 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 5.7 KiB/s wr, 528 op/s; 103/231 objects misplaced (44.589%); 115 B/s, 8 objects/s recovering
Dec  7 14:53:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:54 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:54 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002f00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:54 np0005549633 python3[96627]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid a8ac706f-8288-541e-8e56-e1124d9b483d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  7 14:53:54 np0005549633 podman[96628]: 2025-12-07 19:53:54.535145251 +0000 UTC m=+0.078275351 container create 094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9 (image=quay.io/ceph/ceph:v19, name=flamboyant_faraday, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  7 14:53:54 np0005549633 systemd[1]: Started libpod-conmon-094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9.scope.
Dec  7 14:53:54 np0005549633 podman[96628]: 2025-12-07 19:53:54.503328621 +0000 UTC m=+0.046458771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  7 14:53:54 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:53:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8e70700b44a5ae0ec745680559196171bc8283d92c4d0157bec5651411bfbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:54 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8e70700b44a5ae0ec745680559196171bc8283d92c4d0157bec5651411bfbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:53:54 np0005549633 podman[96628]: 2025-12-07 19:53:54.647381427 +0000 UTC m=+0.190511667 container init 094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9 (image=quay.io/ceph/ceph:v19, name=flamboyant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  7 14:53:54 np0005549633 podman[96628]: 2025-12-07 19:53:54.659253885 +0000 UTC m=+0.202383995 container start 094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9 (image=quay.io/ceph/ceph:v19, name=flamboyant_faraday, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  7 14:53:54 np0005549633 podman[96628]: 2025-12-07 19:53:54.663896349 +0000 UTC m=+0.207026459 container attach 094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9 (image=quay.io/ceph/ceph:v19, name=flamboyant_faraday, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  7 14:53:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  7 14:53:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:54 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:56 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v81: 337 pgs: 1 active+recovering+remapped, 17 active+recovery_wait+remapped, 5 active+remapped, 17 peering, 297 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 4.1 KiB/s wr, 380 op/s; 94/231 objects misplaced (40.693%); 183 B/s, 8 objects/s recovering
Dec  7 14:53:56 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  7 14:53:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba80016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195357 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 14:53:58 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 1 active+recovering+remapped, 13 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 319 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 64/227 objects misplaced (28.194%); 208 B/s, 1 keys/s, 4 objects/s recovering
Dec  7 14:53:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:58 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:58 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba80016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:53:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:53:58 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:00 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 1 active+recovering+remapped, 13 active+recovery_wait+remapped, 2 active+remapped, 2 peering, 319 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 64/227 objects misplaced (28.194%); 164 B/s, 1 keys/s, 3 objects/s recovering
Dec  7 14:54:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:00 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:00 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  7 14:54:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 71 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=71 pruub=14.151591301s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.726715088s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 71 pg[10.15( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=71 pruub=14.151491165s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.726715088s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 71 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=71 pruub=14.150915146s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.726776123s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 71 pg[10.d( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=71 pruub=14.150740623s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.726776123s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:00 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  7 14:54:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  7 14:54:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:00 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba80016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.hbjfrz on compute-0
Dec  7 14:54:01 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.hbjfrz on compute-0
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]: {
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "user_id": "openstack",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "display_name": "openstack",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "email": "",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "suspended": 0,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "max_buckets": 1000,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "subusers": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "keys": [
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        {
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:            "user": "openstack",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:            "access_key": "YK9G33SZR0IQ6TUW9FMG",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:            "secret_key": "RBvytlJAVXL3pgg7Iv9BukerRmbTmXnAHr8utIzl",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:            "active": true,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:            "create_date": "2025-12-07T19:54:01.103270Z"
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        }
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    ],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "swift_keys": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "caps": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "op_mask": "read, write, delete",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "default_placement": "",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "default_storage_class": "",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "placement_tags": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "bucket_quota": {
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "enabled": false,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "check_on_raw": false,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "max_size": -1,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "max_size_kb": 0,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "max_objects": -1
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    },
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "user_quota": {
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "enabled": false,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "check_on_raw": false,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "max_size": -1,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "max_size_kb": 0,
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:        "max_objects": -1
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    },
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "temp_url_keys": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "type": "rgw",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "mfa_ids": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "account_id": "",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "path": "/",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "create_date": "2025-12-07T19:54:01.102761Z",
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "tags": [],
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]:    "group_ids": []
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]: }
Dec  7 14:54:01 np0005549633 flamboyant_faraday[96644]: 
Dec  7 14:54:01 np0005549633 podman[96628]: 2025-12-07 19:54:01.197755874 +0000 UTC m=+6.740885944 container died 094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9 (image=quay.io/ceph/ceph:v19, name=flamboyant_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  7 14:54:01 np0005549633 systemd[1]: libpod-094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9.scope: Deactivated successfully.
Dec  7 14:54:01 np0005549633 systemd[1]: var-lib-containers-storage-overlay-7e8e70700b44a5ae0ec745680559196171bc8283d92c4d0157bec5651411bfbd-merged.mount: Deactivated successfully.
Dec  7 14:54:01 np0005549633 podman[96628]: 2025-12-07 19:54:01.256346909 +0000 UTC m=+6.799477019 container remove 094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9 (image=quay.io/ceph/ceph:v19, name=flamboyant_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  7 14:54:01 np0005549633 systemd[1]: libpod-conmon-094eda3195f5e1e4fd17501d38642eae982ffee4b25bc27f5f1151f2290a2eb9.scope: Deactivated successfully.
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  7 14:54:01 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  7 14:54:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 72 pg[10.5( v 67'1169 (0'0,67'1169] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=72 pruub=12.878068924s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=63'1166 lcod 67'1168 mlcod 67'1168 active pruub 237.752944946s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 72 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=72 pruub=12.877946854s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.752960205s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 72 pg[10.5( v 67'1169 (0'0,67'1169] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=72 pruub=12.877889633s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=63'1166 lcod 67'1168 mlcod 0'0 unknown NOTIFY pruub 237.752944946s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 72 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=72 pruub=12.877872467s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.752960205s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 72 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=72 pruub=12.877243042s) [2] async=[2] r=-1 lpr=72 pi=[61,72)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.752929688s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 72 pg[10.1d( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=72 pruub=12.877180099s) [2] r=-1 lpr=72 pi=[61,72)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.752929688s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:01 np0005549633 python3[96838]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:54:02 np0005549633 ceph-mgr[74680]: [dashboard INFO request] [192.168.122.100:49090] [GET] [200] [0.142s] [6.3K] [683a818b-21c6-4ab9-99f7-2b8fc2fe7a6c] /
Dec  7 14:54:02 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 320 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 48/228 objects misplaced (21.053%); 211 B/s, 1 keys/s, 5 objects/s recovering
Dec  7 14:54:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba40016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:02 np0005549633 python3[96892]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  7 14:54:02 np0005549633 ceph-mgr[74680]: [dashboard INFO request] [192.168.122.100:45542] [GET] [200] [0.002s] [6.3K] [8b1b5a2a-2100-4711-80df-e3260bb2e84f] /
Dec  7 14:54:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  7 14:54:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:03 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:03 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:54:03 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:03 np0005549633 ceph-mon[74384]: Deploying daemon keepalived.nfs.cephfs.compute-0.hbjfrz on compute-0
Dec  7 14:54:04 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v87: 337 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 5 active+remapped, 1 peering, 320 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 48/228 objects misplaced (21.053%); 141 B/s, 1 keys/s, 4 objects/s recovering
Dec  7 14:54:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:04 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:04 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.716453789 +0000 UTC m=+2.926989685 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.738730279 +0000 UTC m=+2.949266155 container create 8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_khorana, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vendor=Red Hat, Inc., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, release=1793)
Dec  7 14:54:04 np0005549633 systemd[1]: Started libpod-conmon-8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8.scope.
Dec  7 14:54:04 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.836704089 +0000 UTC m=+3.047239975 container init 8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_khorana, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, version=2.2.4, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.848712942 +0000 UTC m=+3.059248838 container start 8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_khorana, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.853709573 +0000 UTC m=+3.064245459 container attach 8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_khorana, version=2.2.4, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64)
Dec  7 14:54:04 np0005549633 flamboyant_khorana[96974]: 0 0
Dec  7 14:54:04 np0005549633 systemd[1]: libpod-8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8.scope: Deactivated successfully.
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.858016345 +0000 UTC m=+3.068552241 container died 8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_khorana, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, architecture=x86_64, version=2.2.4, vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2)
Dec  7 14:54:04 np0005549633 systemd[1]: var-lib-containers-storage-overlay-6bd588509149351b9a0efdacc2cb7adbe5b2aa1ba086dec611a8cd409aa42585-merged.mount: Deactivated successfully.
Dec  7 14:54:04 np0005549633 podman[96854]: 2025-12-07 19:54:04.912995485 +0000 UTC m=+3.123531371 container remove 8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8 (image=quay.io/ceph/keepalived:2.2.4, name=flamboyant_khorana, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.buildah.version=1.28.2, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.component=keepalived-container, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, version=2.2.4)
Dec  7 14:54:04 np0005549633 systemd[1]: libpod-conmon-8cf0aca943fd897163b65c5cb945beba9eae87c2d5066bf6a70bfe1cd074d3a8.scope: Deactivated successfully.
Dec  7 14:54:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:04 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:05 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:05 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:05 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  7 14:54:05 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  7 14:54:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 73 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=73 pruub=9.501852036s) [2] async=[2] r=-1 lpr=73 pi=[61,73)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.753158569s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 73 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=73 pruub=9.501797676s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.753158569s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 73 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=73 pruub=9.501734734s) [2] async=[2] r=-1 lpr=73 pi=[61,73)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.753265381s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 73 pg[10.1( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=73 pruub=9.501668930s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.753265381s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:05 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:05 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:05 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:05 np0005549633 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.hbjfrz for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:54:06 np0005549633 podman[97116]: 2025-12-07 19:54:06.081507609 +0000 UTC m=+0.087335234 container create 30ae4ca91a73d965fccb2f94792c266d32f15db8797f181ddc3da3d62362665f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v89: 337 pgs: 1 active+recovering+remapped, 10 active+recovery_wait+remapped, 3 peering, 323 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 48/229 objects misplaced (20.961%); 36 B/s, 1 objects/s recovering
Dec  7 14:54:06 np0005549633 podman[97116]: 2025-12-07 19:54:06.042913025 +0000 UTC m=+0.048740740 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 14:54:06 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e345832b427493e083457d20d52abd644cd9cae6f0aa5d53f0c01050266be43/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:06 np0005549633 podman[97116]: 2025-12-07 19:54:06.166744258 +0000 UTC m=+0.172571953 container init 30ae4ca91a73d965fccb2f94792c266d32f15db8797f181ddc3da3d62362665f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=2.2.4)
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  7 14:54:06 np0005549633 podman[97116]: 2025-12-07 19:54:06.179404637 +0000 UTC m=+0.185232312 container start 30ae4ca91a73d965fccb2f94792c266d32f15db8797f181ddc3da3d62362665f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz, io.buildah.version=1.28.2, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., name=keepalived, distribution-scope=public)
Dec  7 14:54:06 np0005549633 bash[97116]: 30ae4ca91a73d965fccb2f94792c266d32f15db8797f181ddc3da3d62362665f
Dec  7 14:54:06 np0005549633 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.hbjfrz for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: Starting VRRP child process, pid=4
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: Startup complete
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: (VI_0) Entering BACKUP STATE (init)
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:06 2025: VRRP_Script(check_backend) succeeded
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:06 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:06 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.282387733s) [2] async=[2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.752960205s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.b( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.282303810s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.752960205s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.281704903s) [2] async=[2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.753555298s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.281309128s) [2] async=[2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.753173828s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.11( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.281519890s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.753555298s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.281376839s) [2] async=[2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 237.753433228s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.1b( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.281338692s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.753433228s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:06 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 74 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=74 pruub=8.281222343s) [2] r=-1 lpr=74 pi=[61,74)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.753173828s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:54:06 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.qznwzf on compute-2
Dec  7 14:54:06 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.qznwzf on compute-2
Dec  7 14:54:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:06 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=4 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.083176613s) [0] async=[0] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 245.753829956s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=4 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.083052635s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.753829956s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.056095123s) [2] async=[2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 245.727066040s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.055964470s) [2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.727066040s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.081011772s) [2] async=[2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 245.753570557s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.081158638s) [2] async=[2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 245.753799438s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.080962181s) [2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.753570557s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.13( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=5 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.081048965s) [2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.753799438s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.081324577s) [2] async=[2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 245.753829956s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:07 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 75 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=66/67 n=6 ec=61/47 lis/c=66/61 les/c/f=67/62/0 sis=75 pruub=15.080771446s) [2] r=-1 lpr=75 pi=[61,75)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.753829956s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: Deploying daemon keepalived.nfs.cephfs.compute-2.qznwzf on compute-2
Dec  7 14:54:07 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:08 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v92: 337 pgs: 2 active+recovering+remapped, 5 active+recovery_wait+remapped, 4 active+remapped, 3 peering, 323 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 28/230 objects misplaced (12.174%); 84 B/s, 1 objects/s recovering
Dec  7 14:54:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:08 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:08 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:08 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 14:54:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  7 14:54:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:08 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:08 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  7 14:54:09 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  7 14:54:09 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.14 deep-scrub starts
Dec  7 14:54:09 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.14 deep-scrub ok
Dec  7 14:54:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:09 2025: (VI_0) Entering MASTER STATE
Dec  7 14:54:10 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v94: 337 pgs: 1 active+recovery_wait+remapped, 4 peering, 332 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4/230 objects misplaced (1.739%); 208 B/s, 8 objects/s recovering
Dec  7 14:54:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:10 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:10 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:10 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec  7 14:54:10 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec  7 14:54:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:10 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:11 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 14:54:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:11 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 14:54:11 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec  7 14:54:11 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec  7 14:54:12 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v95: 337 pgs: 4 peering, 333 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 190 B/s, 7 objects/s recovering
Dec  7 14:54:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:12 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:12 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8003820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:12 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  7 14:54:12 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  7 14:54:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:12 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 14:54:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:12 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:13 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  7 14:54:13 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  7 14:54:14 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 4 peering, 333 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 62 B/s, 3 objects/s recovering
Dec  7 14:54:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:14 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:14 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:14 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.d deep-scrub starts
Dec  7 14:54:14 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.d deep-scrub ok
Dec  7 14:54:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:14 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:15 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Dec  7 14:54:15 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Dec  7 14:54:15 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:15 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 14:54:16 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 337 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 4 objects/s recovering
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  7 14:54:16 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  7 14:54:16 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:16 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.118417740s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 249.452682495s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.118350029s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.452682495s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.117900848s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 249.452331543s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.117878914s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.452331543s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.4( v 71'1170 (0'0,71'1170] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.117958069s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=71'1170 lcod 71'1169 mlcod 71'1169 active pruub 249.452651978s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.4( v 71'1170 (0'0,71'1170] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.117893219s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=71'1170 lcod 71'1169 mlcod 0'0 unknown NOTIFY pruub 249.452651978s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.117315292s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 249.452301025s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 77 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=77 pruub=10.117292404s) [2] r=-1 lpr=77 pi=[61,77)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 249.452301025s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:16 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:16 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec  7 14:54:16 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec  7 14:54:16 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:16 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.4( v 71'1170 (0'0,71'1170] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=71'1170 lcod 71'1169 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.4( v 71'1170 (0'0,71'1170] local-lis/les=61/62 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=71'1170 lcod 71'1169 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:17 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 78 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:54:17 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:18 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 337 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 0 op/s; 82 B/s, 2 objects/s recovering
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:18 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:18 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:18 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:18 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:54:18 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Dec  7 14:54:18 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Dec  7 14:54:18 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  7 14:54:18 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:18 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:19 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event ced89aca-a8cf-486a-ac61-972ecbcb6db3 (Global Recovery Event) in 30 seconds
Dec  7 14:54:19 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec  7 14:54:19 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev ec1017a4-8f88-4277-86e0-85a2f5cf5925 (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event ec1017a4-8f88-4277-86e0-85a2f5cf5925 (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 66 seconds
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:54:20
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] do_upmap
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.nfs', 'volumes', 'default.rgw.log', '.rgw.root', 'backups', 'images']
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] prepared 2/10 upmap changes
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] Executing plan auto_2025-12-07_19:54:20
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] ceph osd pg-upmap-items 10.17 mappings [{'from': 2, 'to': 1}]
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [balancer INFO root] ceph osd pg-upmap-items 10.1c mappings [{'from': 2, 'to': 1}]
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.17", "id": [2, 1]} v 0)
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.17", "id": [2, 1]}]: dispatch
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.1c", "id": [2, 1]} v 0)
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.1c", "id": [2, 1]}]: dispatch
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v101: 337 pgs: 337 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 68 B/s, 2 objects/s recovering
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  7 14:54:20 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:54:20 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:20 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:54:20 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:54:20 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:20 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:20 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec  7 14:54:20 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec  7 14:54:20 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:20 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:21 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:21 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 79 pg[6.d( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=64/64 les/c/f=66/67/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 79 pg[6.5( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=64/64 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 79 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 79 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 79 pg[10.4( v 71'1170 (0'0,71'1170] local-lis/les=78/79 n=6 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[61,78)/1 crt=71'1170 lcod 71'1169 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:21 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 79 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:21 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:21 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 704ee771-b5c3-428d-8800-2173ec489ad6 (Updating alertmanager deployment (+1 -> 1))
Dec  7 14:54:21 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec  7 14:54:21 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec  7 14:54:22 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 4 remapped+peering, 333 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:22 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:22 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4002d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.17", "id": [2, 1]}]': finished
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.1c", "id": [2, 1]}]': finished
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 crush map has features 3314933000854323200, adjusting msgr requires
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 crush map has features 432629239337189376, adjusting msgr requires
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 crush map has features 432629239337189376, adjusting msgr requires
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 crush map has features 432629239337189376, adjusting msgr requires
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 80 crush map has features 432629239337189376, adjusting msgr requires for clients
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 80 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 80 crush map has features 3314933000854323200, adjusting msgr requires for osds
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[61,78)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=6 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=80 pruub=15.139488220s) [2] async=[2] r=-1 lpr=80 pi=[61,80)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 261.011077881s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.c( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=6 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=80 pruub=15.139385223s) [2] r=-1 lpr=80 pi=[61,80)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.011077881s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.4( v 79'1173 (0'0,79'1173] local-lis/les=78/79 n=6 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[61,78)/1 crt=71'1170 lcod 79'1172 mlcod 53'844 active+recovering+remapped rops=1 mbc={255={(0+1)=5}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=80 pruub=15.138625145s) [1] async=[2] r=0 lpr=80 pi=[61,80)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 261.010986328s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 2 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=80 pruub=15.138625145s) [1] r=0 lpr=80 pi=[61,80)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown pruub 261.010986328s@ mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=80) [1] r=0 lpr=80 pi=[75,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[6.d( v 53'39 lc 53'13 (0'0,53'39] local-lis/les=79/80 n=2 ec=56/22 lis/c=64/64 les/c/f=66/67/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=53'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:22 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 80 pg[6.5( v 53'39 lc 53'11 (0'0,53'39] local-lis/les=79/80 n=2 ec=56/22 lis/c=64/64 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=53'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.17", "id": [2, 1]}]: dispatch
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.1c", "id": [2, 1]}]: dispatch
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:22 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:22 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:22 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:23 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  7 14:54:23 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 81 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=81 pruub=14.848990440s) [2] async=[2] r=-1 lpr=81 pi=[61,81)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 261.011291504s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 81 pg[10.14( v 53'1163 (0'0,53'1163] local-lis/les=78/79 n=5 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=81 pruub=14.848901749s) [2] r=-1 lpr=81 pi=[61,81)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 261.011291504s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 81 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[75,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 81 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=81) [1]/[2] r=-1 lpr=81 pi=[75,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 81 pg[10.1c( v 53'1163 (0'0,53'1163] local-lis/les=80/81 n=5 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=80) [1] r=0 lpr=80 pi=[61,80)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  7 14:54:23 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  7 14:54:24 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 4 remapped+peering, 333 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:24 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:24 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac003ca0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.271975456 +0000 UTC m=+1.610537181 volume create 2491cac13f6e6a59f485de777b5bdcf5239ff73df2d523a475d3bf215ece5271
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.2886593 +0000 UTC m=+1.627221025 container create bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zealous_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.254106511 +0000 UTC m=+1.592668226 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 14:54:24 np0005549633 systemd[90343]: Starting Mark boot as successful...
Dec  7 14:54:24 np0005549633 systemd[90343]: Finished Mark boot as successful.
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: Deploying daemon alertmanager.compute-0 on compute-0
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.17", "id": [2, 1]}]': finished
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "10.1c", "id": [2, 1]}]': finished
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  7 14:54:24 np0005549633 systemd[1]: Started libpod-conmon-bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8.scope.
Dec  7 14:54:24 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:24 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:24 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:24 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63dac76198a7786fb9ca616bcd5335a4f53829a90dc713315ba33a6224c3d6fe/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.380685155 +0000 UTC m=+1.719246930 container init bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zealous_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.393497369 +0000 UTC m=+1.732059074 container start bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zealous_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.397485963 +0000 UTC m=+1.736047678 container attach bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zealous_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:24 np0005549633 zealous_payne[97380]: 65534 65534
Dec  7 14:54:24 np0005549633 systemd[1]: libpod-bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8.scope: Deactivated successfully.
Dec  7 14:54:24 np0005549633 conmon[97380]: conmon bc025c42c346597dc0c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8.scope/container/memory.events
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.399841024 +0000 UTC m=+1.738402709 container died bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zealous_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:24 np0005549633 systemd[1]: var-lib-containers-storage-overlay-63dac76198a7786fb9ca616bcd5335a4f53829a90dc713315ba33a6224c3d6fe-merged.mount: Deactivated successfully.
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.461263052 +0000 UTC m=+1.799824767 container remove bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=zealous_payne, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:24 np0005549633 podman[97239]: 2025-12-07 19:54:24.465732659 +0000 UTC m=+1.804294374 volume remove 2491cac13f6e6a59f485de777b5bdcf5239ff73df2d523a475d3bf215ece5271
Dec  7 14:54:24 np0005549633 systemd[1]: libpod-conmon-bc025c42c346597dc0c854723ea87f27d8c1bb15839f06d7a76b72064e5d11f8.scope: Deactivated successfully.
Dec  7 14:54:24 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 26 completed events
Dec  7 14:54:24 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:54:24 np0005549633 podman[97396]: 2025-12-07 19:54:24.550155217 +0000 UTC m=+0.048241227 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 14:54:24 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec  7 14:54:24 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec  7 14:54:24 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:24 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc001fd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:25 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.02589443 +0000 UTC m=+0.523980400 volume create ad7e2d1b66f5c3145c139aa374146a6c1442dde6c4c2a53fac3d5c42fb0ac95c
Dec  7 14:54:25 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.039361839 +0000 UTC m=+0.537447769 container create fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37 (image=quay.io/prometheus/alertmanager:v0.25.0, name=xenodochial_galois, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 82 pg[10.4( v 79'1173 (0'0,79'1173] local-lis/les=78/79 n=6 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=82 pruub=12.896745682s) [2] async=[2] r=-1 lpr=82 pi=[61,82)/1 crt=71'1170 lcod 79'1172 mlcod 79'1172 active pruub 261.011199951s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:25 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 82 pg[10.4( v 79'1173 (0'0,79'1173] local-lis/les=78/79 n=6 ec=61/47 lis/c=78/61 les/c/f=79/62/0 sis=82 pruub=12.896586418s) [2] r=-1 lpr=82 pi=[61,82)/1 crt=71'1170 lcod 79'1172 mlcod 0'0 unknown NOTIFY pruub 261.011199951s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:25 np0005549633 systemd[1]: Started libpod-conmon-fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37.scope.
Dec  7 14:54:25 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:25 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844a3bab497e6610f8467614440cb3b1d38ac571798e3564e731fc8539d069e6/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.144387973 +0000 UTC m=+0.642473923 container init fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37 (image=quay.io/prometheus/alertmanager:v0.25.0, name=xenodochial_galois, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.15694429 +0000 UTC m=+0.655030260 container start fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37 (image=quay.io/prometheus/alertmanager:v0.25.0, name=xenodochial_galois, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:25 np0005549633 xenodochial_galois[97412]: 65534 65534
Dec  7 14:54:25 np0005549633 systemd[1]: libpod-fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37.scope: Deactivated successfully.
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.163238504 +0000 UTC m=+0.661324484 container attach fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37 (image=quay.io/prometheus/alertmanager:v0.25.0, name=xenodochial_galois, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.163754947 +0000 UTC m=+0.661840887 container died fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37 (image=quay.io/prometheus/alertmanager:v0.25.0, name=xenodochial_galois, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:25 np0005549633 systemd[1]: var-lib-containers-storage-overlay-844a3bab497e6610f8467614440cb3b1d38ac571798e3564e731fc8539d069e6-merged.mount: Deactivated successfully.
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.223668457 +0000 UTC m=+0.721754427 container remove fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37 (image=quay.io/prometheus/alertmanager:v0.25.0, name=xenodochial_galois, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:25 np0005549633 podman[97396]: 2025-12-07 19:54:25.229873928 +0000 UTC m=+0.727959948 volume remove ad7e2d1b66f5c3145c139aa374146a6c1442dde6c4c2a53fac3d5c42fb0ac95c
Dec  7 14:54:25 np0005549633 systemd[1]: libpod-conmon-fb13a3b20a6d8dffdccaf6cb1099dfbc54b44bc7d99872cf0a3f6343af8e5b37.scope: Deactivated successfully.
Dec  7 14:54:25 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:25 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:25 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:25 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195425 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 14:54:25 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:25 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec  7 14:54:25 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:25 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec  7 14:54:25 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:25 np0005549633 systemd[1]: Starting Ceph alertmanager.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:54:26 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v108: 337 pgs: 2 peering, 7 remapped+peering, 328 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 14:54:26 np0005549633 podman[97554]: 2025-12-07 19:54:26.228126361 +0000 UTC m=+0.112253223 volume create 78c77aa4f0763333ab93b36e814e2f7eda417586c8353a3c7a1cfaf53cec0cdc
Dec  7 14:54:26 np0005549633 podman[97554]: 2025-12-07 19:54:26.141698212 +0000 UTC m=+0.025825094 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  7 14:54:26 np0005549633 podman[97554]: 2025-12-07 19:54:26.247523136 +0000 UTC m=+0.131650008 container create 059920cb70c3ac6aee5ee5e91305a5fb20bf28017b973debfb12e3e6ef0a9120 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:26 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:26 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:26 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605f5bbab9537ca32ac483de82294915c5135732628577eb9428025b3f493f85/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:26 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/605f5bbab9537ca32ac483de82294915c5135732628577eb9428025b3f493f85/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:26 np0005549633 podman[97554]: 2025-12-07 19:54:26.540200064 +0000 UTC m=+0.424327046 container init 059920cb70c3ac6aee5ee5e91305a5fb20bf28017b973debfb12e3e6ef0a9120 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:26 np0005549633 podman[97554]: 2025-12-07 19:54:26.550177694 +0000 UTC m=+0.434304596 container start 059920cb70c3ac6aee5ee5e91305a5fb20bf28017b973debfb12e3e6ef0a9120 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:54:26 np0005549633 bash[97554]: 059920cb70c3ac6aee5ee5e91305a5fb20bf28017b973debfb12e3e6ef0a9120
Dec  7 14:54:26 np0005549633 systemd[1]: Started Ceph alertmanager.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.591Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.591Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.606Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.609Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  7 14:54:26 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.678Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.679Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  7 14:54:26 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.686Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:26.686Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  7 14:54:26 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec  7 14:54:26 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:26 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:27 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:27 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:27 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,9 pgs not in active + clean state
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:28 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 704ee771-b5c3-428d-8800-2173ec489ad6 (Updating alertmanager deployment (+1 -> 1))
Dec  7 14:54:28 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 704ee771-b5c3-428d-8800-2173ec489ad6 (Updating alertmanager deployment (+1 -> 1)) in 6 seconds
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  7 14:54:28 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v109: 337 pgs: 2 peering, 6 remapped+peering, 329 active+clean; 456 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 26 B/s, 2 objects/s recovering
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:28 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev 465ee16e-c688-4dfe-ae71-ae3fa6aa1ac6 (Updating grafana deployment (+1 -> 1))
Dec  7 14:54:28 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:28 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:28 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec  7 14:54:28 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec  7 14:54:28 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:28 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec  7 14:54:28 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:28.610Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000239423s
Dec  7 14:54:28 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec  7 14:54:28 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec  7 14:54:28 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:28 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:28 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.416446897180439e-06 of space, bias 1.0, pg target 0.0007249340691541316 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 2.5436283128215145e-07 of space, bias 4.0, pg target 0.00030523539753858175 quantized to 32 (current 32)
Dec  7 14:54:29 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Dec  7 14:54:29 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Dec  7 14:54:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec  7 14:54:29 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  7 14:54:29 np0005549633 ceph-mgr[74680]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  7 14:54:29 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec  7 14:54:30 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v110: 337 pgs: 5 active+remapped, 1 peering, 331 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 121 B/s, 4 objects/s recovering
Dec  7 14:54:30 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:30 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:30 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:30 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:30 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Dec  7 14:54:30 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: Regenerating cephadm self-signed grafana TLS certificates
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 83 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=81/75 les/c/f=82/76/0 sis=83) [1] r=0 lpr=83 pi=[75,83)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:30 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 83 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=81/75 les/c/f=82/76/0 sis=83) [1] r=0 lpr=83 pi=[75,83)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:30 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  7 14:54:30 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:30 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:31 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:31 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec  7 14:54:31 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec  7 14:54:31 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  7 14:54:31 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  7 14:54:31 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  7 14:54:32 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v112: 337 pgs: 6 peering, 331 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 97 B/s, 3 objects/s recovering
Dec  7 14:54:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:32 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:32 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:32 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  7 14:54:32 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  7 14:54:32 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:32 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:32 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 27 completed events
Dec  7 14:54:32 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:54:33 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:33 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  7 14:54:33 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:33 np0005549633 ceph-mon[74384]: Deploying daemon grafana.compute-0 on compute-0
Dec  7 14:54:33 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  7 14:54:33 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  7 14:54:34 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v113: 337 pgs: 6 peering, 331 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 97 B/s, 2 objects/s recovering
Dec  7 14:54:34 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:34 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:34 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:34 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:34 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  7 14:54:34 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  7 14:54:34 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:34 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:35 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec  7 14:54:35 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec  7 14:54:35 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  7 14:54:35 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:35 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  7 14:54:35 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 84 pg[10.17( v 53'1163 (0'0,53'1163] local-lis/les=83/84 n=5 ec=61/47 lis/c=81/75 les/c/f=82/76/0 sis=83) [1] r=0 lpr=83 pi=[75,83)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:36 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v115: 337 pgs: 2 active+clean+scrubbing, 5 peering, 330 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail; 77 B/s, 1 objects/s recovering
Dec  7 14:54:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:36 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:36 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-alertmanager-compute-0[97571]: ts=2025-12-07T19:54:36.611Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001938504s
Dec  7 14:54:36 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  7 14:54:36 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  7 14:54:36 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:36 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.06397047 +0000 UTC m=+5.397280783 container create dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce (image=quay.io/ceph/grafana:10.4.0, name=gifted_brahmagupta, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.038014335 +0000 UTC m=+5.371324688 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 14:54:37 np0005549633 systemd[1]: Started libpod-conmon-dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce.scope.
Dec  7 14:54:37 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.168649515 +0000 UTC m=+5.501959878 container init dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce (image=quay.io/ceph/grafana:10.4.0, name=gifted_brahmagupta, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.181267443 +0000 UTC m=+5.514577776 container start dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce (image=quay.io/ceph/grafana:10.4.0, name=gifted_brahmagupta, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.185678948 +0000 UTC m=+5.518989291 container attach dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce (image=quay.io/ceph/grafana:10.4.0, name=gifted_brahmagupta, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 gifted_brahmagupta[97901]: 472 0
Dec  7 14:54:37 np0005549633 systemd[1]: libpod-dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce.scope: Deactivated successfully.
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.189490817 +0000 UTC m=+5.522801150 container died dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce (image=quay.io/ceph/grafana:10.4.0, name=gifted_brahmagupta, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 systemd[1]: var-lib-containers-storage-overlay-3318f076a59de496d276ef7a4f5ec99cbf28c4d3d17a78a44e778fd0b6498d97-merged.mount: Deactivated successfully.
Dec  7 14:54:37 np0005549633 podman[97687]: 2025-12-07 19:54:37.339503192 +0000 UTC m=+5.672813535 container remove dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce (image=quay.io/ceph/grafana:10.4.0, name=gifted_brahmagupta, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 systemd[1]: libpod-conmon-dcbae4a940845f7485d992e705e2552f5a3709813d0653445a2e8e94ffb618ce.scope: Deactivated successfully.
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.486258442 +0000 UTC m=+0.112741295 container create 5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba (image=quay.io/ceph/grafana:10.4.0, name=recursing_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.410136851 +0000 UTC m=+0.036619744 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 14:54:37 np0005549633 systemd[1]: Started libpod-conmon-5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba.scope.
Dec  7 14:54:37 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.575183367 +0000 UTC m=+0.201666240 container init 5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba (image=quay.io/ceph/grafana:10.4.0, name=recursing_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.585585867 +0000 UTC m=+0.212068680 container start 5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba (image=quay.io/ceph/grafana:10.4.0, name=recursing_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 recursing_fermi[97937]: 472 0
Dec  7 14:54:37 np0005549633 systemd[1]: libpod-5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba.scope: Deactivated successfully.
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.591198853 +0000 UTC m=+0.217681696 container attach 5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba (image=quay.io/ceph/grafana:10.4.0, name=recursing_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.591999494 +0000 UTC m=+0.218482347 container died 5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba (image=quay.io/ceph/grafana:10.4.0, name=recursing_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 systemd[1]: var-lib-containers-storage-overlay-39434713071cfe9f3a6ba522f78a04f14044ef3619209857e8ba04b03e3fff53-merged.mount: Deactivated successfully.
Dec  7 14:54:37 np0005549633 podman[97920]: 2025-12-07 19:54:37.685543848 +0000 UTC m=+0.312026691 container remove 5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba (image=quay.io/ceph/grafana:10.4.0, name=recursing_fermi, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:37 np0005549633 systemd[1]: libpod-conmon-5590f831256d4b963984bf2fa30b6693cd11148f11c8092a5e0f8f9390e0f3ba.scope: Deactivated successfully.
Dec  7 14:54:37 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec  7 14:54:37 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec  7 14:54:37 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:37 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:38 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:38 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:38 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:38 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:38 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  7 14:54:38 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:38 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:38 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c0028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:38 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:38 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:38 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:38 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:38 np0005549633 systemd[1]: Starting Ceph grafana.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:54:38 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec  7 14:54:38 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec  7 14:54:38 np0005549633 podman[98081]: 2025-12-07 19:54:38.916568341 +0000 UTC m=+0.067077587 container create efb83cfca0df12cdc5fff390e5f762ac058e3b80e60bac0e501b74cf6fb2a4d4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:38 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:38 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:38 np0005549633 podman[98081]: 2025-12-07 19:54:38.893790977 +0000 UTC m=+0.044300243 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  7 14:54:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6822f1a17439fb944496b8d1c963cdf48832166441f05d4dd569f8b3941ea887/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6822f1a17439fb944496b8d1c963cdf48832166441f05d4dd569f8b3941ea887/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6822f1a17439fb944496b8d1c963cdf48832166441f05d4dd569f8b3941ea887/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6822f1a17439fb944496b8d1c963cdf48832166441f05d4dd569f8b3941ea887/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:39 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6822f1a17439fb944496b8d1c963cdf48832166441f05d4dd569f8b3941ea887/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:39 np0005549633 podman[98081]: 2025-12-07 19:54:39.037925569 +0000 UTC m=+0.188434845 container init efb83cfca0df12cdc5fff390e5f762ac058e3b80e60bac0e501b74cf6fb2a4d4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:39 np0005549633 podman[98081]: 2025-12-07 19:54:39.048665839 +0000 UTC m=+0.199175095 container start efb83cfca0df12cdc5fff390e5f762ac058e3b80e60bac0e501b74cf6fb2a4d4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  7 14:54:39 np0005549633 bash[98081]: efb83cfca0df12cdc5fff390e5f762ac058e3b80e60bac0e501b74cf6fb2a4d4
Dec  7 14:54:39 np0005549633 systemd[1]: Started Ceph grafana.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:54:39 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:54:39 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.30536465Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-07T19:54:39Z
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.30573132Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.30575003Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305756301Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305763071Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305768311Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305773871Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305779511Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305785541Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305790511Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305795502Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305800442Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305805542Z level=info msg=Target target=[all]
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305815262Z level=info msg="Path Home" path=/usr/share/grafana
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305820552Z level=info msg="Path Data" path=/var/lib/grafana
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305826902Z level=info msg="Path Logs" path=/var/log/grafana
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305833733Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305838743Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=settings t=2025-12-07T19:54:39.305844883Z level=info msg="App mode production"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore t=2025-12-07T19:54:39.306250603Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore t=2025-12-07T19:54:39.306295174Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.307213208Z level=info msg="Starting DB migrations"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.308775169Z level=info msg="Executing migration" id="create migration_log table"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.310430922Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.654693ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.314832106Z level=info msg="Executing migration" id="create user table"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.315470693Z level=info msg="Migration successfully executed" id="create user table" duration=639.277µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.317828394Z level=info msg="Executing migration" id="add unique index user.login"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.319120098Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.291304ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.321674094Z level=info msg="Executing migration" id="add unique index user.email"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.32301743Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.342615ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.325501814Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.32686962Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.367686ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.329970711Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.331314666Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.344695ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.333707728Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.338498282Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.784794ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.340803363Z level=info msg="Executing migration" id="create user table v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.341731106Z level=info msg="Migration successfully executed" id="create user table v2" duration=927.823µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.344302873Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.345114735Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=811.942µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.348229616Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.349017686Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=789.3µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.35108709Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.351515701Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=428.481µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.354323354Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.354941851Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=618.787µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.356950403Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.358103283Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.152511ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.359964581Z level=info msg="Executing migration" id="Update user table charset"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.359995862Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.461µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.361906212Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.363043582Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.14023ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.366366948Z level=info msg="Executing migration" id="Add missing user data"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.366609844Z level=info msg="Migration successfully executed" id="Add missing user data" duration=242.856µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.368828152Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.370014722Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.18644ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.3718262Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.372671702Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=841.811µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.374681014Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.375941177Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.258953ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.378276957Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.387348334Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.069567ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.389425118Z level=info msg="Executing migration" id="Add uid column to user"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.39067921Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.253832ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.392335893Z level=info msg="Executing migration" id="Update uid column values for users"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.392667033Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=330.9µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.394826978Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.395772563Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=945.435µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.39872503Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.399599393Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=873.773µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.403047152Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.403853754Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=791.601µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.405749323Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.406500773Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=750.94µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.409516571Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.410063165Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=546.264µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.412011085Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.412663683Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=651.938µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.414939522Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.414961733Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=22.511µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.418442943Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.419023048Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=579.795µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.420780764Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.421310868Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=530.094µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.424121621Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.424722587Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=601.456µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.428481164Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.42905066Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=569.536µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.444694467Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.448074984Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.380837ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.450060997Z level=info msg="Executing migration" id="create temp_user v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.450929179Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=868.502µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.452957342Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.453785233Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=829.331µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.456141234Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.457443808Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.307434ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.459486631Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.460309713Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=823.022µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.46247676Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.463429984Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=954.924µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.465726854Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.466165055Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=427.231µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.468033733Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.468771043Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=739.85µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.470705103Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.471146275Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=441.232µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.475044747Z level=info msg="Executing migration" id="create star table"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.475803586Z level=info msg="Migration successfully executed" id="create star table" duration=758.229µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.47786321Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.478719572Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=856.172µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.482913691Z level=info msg="Executing migration" id="create org table v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.483763364Z level=info msg="Migration successfully executed" id="create org table v1" duration=850.133µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.486016072Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.486807663Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=791.141µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.489064682Z level=info msg="Executing migration" id="create org_user table v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.489808911Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=743.179µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.492439479Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.494035311Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.602492ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.496345661Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.49708446Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=738.669µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.499848322Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.500684404Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=839.342µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.503356394Z level=info msg="Executing migration" id="Update org table charset"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.503429736Z level=info msg="Migration successfully executed" id="Update org table charset" duration=85.242µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.505978872Z level=info msg="Executing migration" id="Update org_user table charset"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.506007133Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=29.851µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.508269891Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.508490807Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=221.026µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.510875359Z level=info msg="Executing migration" id="create dashboard table"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.511801713Z level=info msg="Migration successfully executed" id="create dashboard table" duration=926.794µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.513916318Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.514647297Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=726.249µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.516441394Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.517178004Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=736.279µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.52089946Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.521525357Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=626.226µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.523928838Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.524606817Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=675.029µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.526834604Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.527465261Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=634.007µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.530430298Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.535103079Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.668781ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.538353335Z level=info msg="Executing migration" id="create dashboard v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.539091933Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=738.678µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.543161779Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.544158536Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=999.206µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.546993069Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.547698127Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=705.138µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.552439271Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.552813591Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=374.32µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.554814132Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.555774728Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=960.396µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.55777938Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.557833982Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=54.282µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.560224063Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.561682111Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.461338ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.564545196Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.565991263Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.446707ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.567879833Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.569328631Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.448648ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.571107257Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.571908438Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=800.601µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.576099837Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.577942215Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.848438ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.580001048Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.580989134Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=988.136µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.585039559Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.585932993Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=893.504µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.588427577Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.588459639Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.462µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.590753998Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.590786319Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.001µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.592606796Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.59466881Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.089515ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.596982211Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.59929562Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.314729ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.601606001Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.603626503Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.017722ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.606037606Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.608016347Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.978751ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.610852492Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.611084868Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=232.227µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.615084262Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.615896362Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=812.2µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.620541753Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.621394416Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=852.023µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.629960109Z level=info msg="Executing migration" id="Update dashboard title length"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.629999259Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=39.941µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.632204347Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.633077529Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=873.232µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.635992255Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.636787196Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=794.731µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.639750193Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.645042131Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.292388ms
Dec  7 14:54:39 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec  7 14:54:39 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.87012687Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.871700671Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.576511ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.961754605Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.963732747Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.981572ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.96654264Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.968211763Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.670633ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.971243012Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.971860687Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=626.996µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.974412054Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.975643466Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.230102ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.983315266Z level=info msg="Executing migration" id="Add check_sum column"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.986764366Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.448429ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.989167058Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.990487813Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.320805ms
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.993373158Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.993713186Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=340.878µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.996289714Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.996636692Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=347.408µs
Dec  7 14:54:39 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:39.999056086Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.00036967Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.313364ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.002757461Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.006633533Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.875531ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.008734928Z level=info msg="Executing migration" id="create data_source table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.010322529Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.587761ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.012723252Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.01419865Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.474608ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.017047114Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.018489182Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.441278ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.020673268Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.022077594Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.403856ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.024320363Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.025711129Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.390226ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.027815494Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.037141807Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=9.325663ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.039648552Z level=info msg="Executing migration" id="create data_source table v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.041216233Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.567141ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.043624786Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.045174266Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.54903ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.047227299Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.048712468Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.484479ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.05147727Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.052484186Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.006846ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.055259218Z level=info msg="Executing migration" id="Add column with_credentials"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.059000506Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.740878ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.061670715Z level=info msg="Executing migration" id="Add secure json data column"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.064103958Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.438493ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.066108311Z level=info msg="Executing migration" id="Update data_source table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.066145942Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=39.031µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.068499173Z level=info msg="Executing migration" id="Update initial version to 1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.068902284Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=403.332µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.071592824Z level=info msg="Executing migration" id="Add read_only data column"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.075361861Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.810208ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.078522524Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.078878713Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=356.499µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.081172602Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.081494011Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=321.659µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.083976475Z level=info msg="Executing migration" id="Add uid column"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.087772955Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.796109ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.090633489Z level=info msg="Executing migration" id="Update uid value"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.090950067Z level=info msg="Migration successfully executed" id="Update uid value" duration=316.858µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.093877653Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.094747406Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=869.533µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.096706158Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.098098733Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.387215ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.100396653Z level=info msg="Executing migration" id="create api_key table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.101193504Z level=info msg="Migration successfully executed" id="create api_key table" duration=797.351µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.104344746Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.104960112Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=615.196µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.107010045Z level=info msg="Executing migration" id="add index api_key.key"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.107612281Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=601.916µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.111497382Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.112132379Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=634.907µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.114745496Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.116146853Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.401247ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.118843283Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.119689155Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=846.622µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.121989546Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.122917319Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=927.613µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.125079716Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.131947365Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.868229ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.134398218Z level=info msg="Executing migration" id="create api_key table v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.135198839Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=800.381µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.139778788Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.141326578Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.54675ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.144950803Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec  7 14:54:40 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.14636542Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.417797ms
Dec  7 14:54:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:40 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:40 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:40 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.331658832Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.333920251Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=2.264329ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.336713454Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.337280179Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=571.994µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.340049111Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.341533479Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.484168ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.343943282Z level=info msg="Executing migration" id="Update api_key table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.343969953Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=27.771µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.345875172Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.347720311Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.844749ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.349296602Z level=info msg="Executing migration" id="Add service account foreign key"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.351122149Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.822887ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.353230694Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.353729997Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=499.293µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:40 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.356498519Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.360821861Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=4.322272ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.370916934Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.376768697Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=5.851803ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.382135346Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.384379835Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=2.243389ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.387871855Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.389243161Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.370676ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.392458505Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.394324254Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.864918ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.396940592Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.398246106Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.309494ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.400176256Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.400953436Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=777.31µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.403108552Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.404868188Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.763206ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.455704981Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.455867546Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=192.164µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.459198142Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.459241603Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=45.411µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.462147239Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.46680609Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.658671ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.469237814Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.473821383Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=4.583599ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.481983906Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.482107539Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=120.453µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.485022715Z level=info msg="Executing migration" id="create quota table v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.48638447Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.361145ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.4894501Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.491054541Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.602591ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.494172202Z level=info msg="Executing migration" id="Update quota table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.494221443Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=48.771µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.497289964Z level=info msg="Executing migration" id="create plugin_setting table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.498862935Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.570181ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.50173144Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.503257179Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.52542ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.50637179Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.511413032Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=5.044032ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.514455221Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.514501172Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=46.701µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.517309445Z level=info msg="Executing migration" id="create session table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.518930117Z level=info msg="Migration successfully executed" id="create session table" duration=1.620732ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.579318109Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.579620856Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=305.817µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.58319524Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.583366654Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=172.724µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.586630829Z level=info msg="Executing migration" id="create playlist table v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.588280213Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.648493ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.591437795Z level=info msg="Executing migration" id="create playlist item table v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.592841711Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.403507ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.597381399Z level=info msg="Executing migration" id="Update playlist table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.59742492Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=45.141µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.600048568Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.60008998Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=43.021µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.6028015Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.607863552Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=5.062212ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.611696421Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.616825855Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=5.129474ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.619214418Z level=info msg="Executing migration" id="drop preferences table v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.619364241Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=154.514µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.621740203Z level=info msg="Executing migration" id="drop preferences table v3"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.621892977Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=153.124µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.624095784Z level=info msg="Executing migration" id="create preferences table v3"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.6254641Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.367476ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.627880963Z level=info msg="Executing migration" id="Update preferences table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.627923634Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=43.321µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.630323517Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.635408059Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.083701ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.637811091Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.638067478Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=256.367µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.640376388Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.64545895Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=5.082162ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.647837432Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.652888434Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=5.050142ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.654935627Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.655037309Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=105.622µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.65735394Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.659087225Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.732065ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.661458427Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.6631109Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.648913ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.665967804Z level=info msg="Executing migration" id="create alert table v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.668253424Z level=info msg="Migration successfully executed" id="create alert table v1" duration=2.28509ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.670936674Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.673193822Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=2.255658ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.676050916Z level=info msg="Executing migration" id="add index alert state"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.677705079Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.654383ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.679984049Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.681505169Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.520689ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.683742987Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.684487866Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=744.499µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.686713274Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.68770015Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=987.086µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.689492707Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.690484153Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=988.916µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.692246018Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec  7 14:54:40 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 6f879372-0822-4ffd-893d-fcbb265f27db (Global Recovery Event) in 13 seconds
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.702409773Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.162325ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.70423193Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.705064462Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=832.722µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.706825297Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.707718821Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=893.704µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.709682162Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.710026801Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=344.529µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.711716735Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.712368502Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=651.467µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.714018615Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.714843626Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=825.652µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.717693401Z level=info msg="Executing migration" id="Add column is_default"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.721256164Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.561754ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.722892056Z level=info msg="Executing migration" id="Add column frequency"
Dec  7 14:54:40 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.727465005Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.572749ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.729735314Z level=info msg="Executing migration" id="Add column send_reminder"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.733853642Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.117968ms
Dec  7 14:54:40 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.735649988Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.73919519Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.545122ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.741063449Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.742013813Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=949.964µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.744056827Z level=info msg="Executing migration" id="Update alert table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.744088158Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=26.761µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.746738757Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.746766168Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=28.381µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.748716818Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.749518879Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=801.891µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.751145211Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.752052185Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=906.714µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.75611207Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.757149087Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.036967ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.758923583Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.759690454Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=766.691µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.761121491Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.761802109Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=680.178µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.763352029Z level=info msg="Executing migration" id="Add for to alert table"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.765930566Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.577887ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.767734793Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.771770578Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=4.035765ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.773806801Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.773999306Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=192.675µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.775890146Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.77684771Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=956.994µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.778708069Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.779672733Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=964.114µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.781645955Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.785466324Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.819609ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.787812936Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.787879297Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=66.761µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.789952922Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.790927536Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=974.844µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.793473803Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.794573392Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.099179ms
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.797793635Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.797895878Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=101.813µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.7995196Z level=info msg="Executing migration" id="create annotation table v5"
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:40.800501556Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=981.946µs
Dec  7 14:54:40 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:40 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c0028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.000492001Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.002199675Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.705194ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.00541148Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.007217746Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.809296ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.009688931Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.01117332Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.483648ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.013737195Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.015351958Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.611313ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.017788501Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.019403704Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.614343ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.021762775Z level=info msg="Executing migration" id="Update annotation table charset"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.021805646Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=43.711µs
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.024495756Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.03116713Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.670484ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.033705586Z level=info msg="Executing migration" id="Drop category_id index"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.035147243Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.441427ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.037534575Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.043649785Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.11431ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.045884892Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.047058333Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.173121ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.049477196Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.051033277Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.555001ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.053254905Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.054828776Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.573991ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.057109044Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.074221551Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=17.111616ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.076460488Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.077816554Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.354555ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.080017031Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.0815026Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.484909ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.083899003Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.084417706Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=518.834µs
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.086589322Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.08766958Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.079968ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.090509544Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.090823853Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=314.739µs
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.093216654Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.099411136Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.190982ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.1018916Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.108086392Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.194432ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.110271258Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.1118287Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.556451ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.114144759Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.115663479Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.5176ms
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.148961576Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.149430287Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=475.202µs
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.463576065Z level=info msg="Executing migration" id="Add epoch_end column"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.467218189Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.647824ms
Dec  7 14:54:41 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  7 14:54:41 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.858481523Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec  7 14:54:41 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:41.860334961Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.855428ms
Dec  7 14:54:42 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v118: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:42 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:42 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:42 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:42 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc002cf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:42 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec  7 14:54:42 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:42 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:44 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v119: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.160865Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.16240613Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=1.54486ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.165513301Z level=info msg="Executing migration" id="Move region to single row"
Dec  7 14:54:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.16622935Z level=info msg="Migration successfully executed" id="Move region to single row" duration=724.3µs
Dec  7 14:54:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:44 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  7 14:54:44 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.17120894Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec  7 14:54:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.173340175Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=2.133385ms
Dec  7 14:54:44 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:44 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c0028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:44 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.679949453Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.682506329Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=2.562737ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.764626005Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.766530145Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.90701ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.769109682Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.769993605Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=883.883µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.772834919Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.773763763Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=928.914µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.775712084Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.776624368Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=912.335µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.849854243Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.850782768Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=928.975µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.854461684Z level=info msg="Executing migration" id="create test_data table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.856779053Z level=info msg="Migration successfully executed" id="create test_data table" duration=2.316829ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.862496672Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.864743601Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=2.245849ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.867641566Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.869360901Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.722975ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.872156444Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.87395273Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.796286ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.877349589Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.877697418Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=347.959µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.880581503Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.8812155Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=634.757µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.883929771Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.884021293Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=92.162µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.886900258Z level=info msg="Executing migration" id="create team table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.888300094Z level=info msg="Migration successfully executed" id="create team table" duration=1.398116ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.890658595Z level=info msg="Executing migration" id="add index team.org_id"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.892312419Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.653494ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.894592798Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.896098907Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.505739ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.899012733Z level=info msg="Executing migration" id="Add column uid in team"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.909228099Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=10.207957ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.955428722Z level=info msg="Executing migration" id="Update uid column values in team"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.955846962Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=427.011µs
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.960734449Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.964110278Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=3.375608ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.96689226Z level=info msg="Executing migration" id="create team member table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.968265876Z level=info msg="Migration successfully executed" id="create team member table" duration=1.372736ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.97192243Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.973761539Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.836949ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.976381717Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:44 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.978393259Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.010122ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.980885094Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.982344422Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.459218ms
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.984794026Z level=info msg="Executing migration" id="Add column email to team table"
Dec  7 14:54:44 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:44.994162389Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=9.365313ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.034698384Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.043141764Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=8.4435ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.045605949Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.053742Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=8.136272ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.058397191Z level=info msg="Executing migration" id="create dashboard acl table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.060320852Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.921831ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.063856333Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.0656588Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.802517ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.075806595Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.077806617Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=2.004522ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.081872993Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.083909205Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=2.035323ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.090317052Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.092300844Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.984871ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.095165408Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.096880634Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.717495ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.099702517Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.10174976Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=2.046883ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.104974403Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.106824832Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.853149ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.109767359Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.110716154Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=948.765µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.117039108Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.117418217Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=380.069µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.120638782Z level=info msg="Executing migration" id="create tag table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.122168722Z level=info msg="Migration successfully executed" id="create tag table" duration=1.53052ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.12557813Z level=info msg="Executing migration" id="add index tag.key_value"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.12710965Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.562361ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.131062593Z level=info msg="Executing migration" id="create login attempt table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.132518121Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.452548ms
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.135378965Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.137003397Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.623902ms
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.144519383Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.146067634Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.54771ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.199446703Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.22200075Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=22.550077ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.228210791Z level=info msg="Executing migration" id="create login_attempt v2"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.234814573Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=6.607442ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.237804031Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.239371892Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.567811ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.241919968Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.242599286Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=678.718µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.245003308Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.246180479Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.177291ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.249067354Z level=info msg="Executing migration" id="create user auth table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.25045643Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.389156ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.253637724Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.255246805Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.608621ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.258474249Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.258628023Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=154.994µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.261482507Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.269991469Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.508972ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.272512335Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.280828881Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.314887ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.283439099Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.291414897Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=7.978338ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.294811645Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.303190373Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.379428ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.306618522Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.308270136Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.651284ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.312210677Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.323086141Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=10.868453ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.383452472Z level=info msg="Executing migration" id="create server_lock table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.385404103Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.954711ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.388316199Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.3894973Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.180781ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.392054357Z level=info msg="Executing migration" id="create user auth token table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.393022161Z level=info msg="Migration successfully executed" id="create user auth token table" duration=967.614µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.395320421Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.396331358Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.010767ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.408623097Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.409730596Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.107349ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.411755229Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.413068324Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.313054ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.415529597Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.422473788Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.940831ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.425460775Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.426528353Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.067578ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.429212463Z level=info msg="Executing migration" id="create cache_data table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.430278821Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.065388ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.449425189Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.450773154Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.349805ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.455043945Z level=info msg="Executing migration" id="create short_url table v1"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.456144294Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.099319ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.459870971Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.461921964Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=2.049873ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.465216611Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.465323423Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=108.322µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.470265582Z level=info msg="Executing migration" id="delete alert_definition table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.470434766Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=170.445µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.477141141Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.482679765Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=5.541005ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.490721464Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.492517051Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.794017ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.495153429Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.496450303Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.297574ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.498898197Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.498967269Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=71.782µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.501354922Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.502506151Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.15152ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.506537156Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.507746827Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.210391ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.511268839Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.512328006Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.059087ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.516231368Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.517360108Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.12877ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.523941689Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.530071338Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.128009ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.534954906Z level=info msg="Executing migration" id="drop alert_definition table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.536222968Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.269722ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.538484767Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.53860503Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=120.743µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.540668725Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.541729772Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.060666ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.544078713Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.545223363Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.14387ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.548089368Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.549276279Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.186512ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.55394302Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.554009301Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=64.711µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.556349803Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.557887223Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.5401ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.560734727Z level=info msg="Executing migration" id="create alert_instance table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.561769264Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.036117ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.565985034Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.567063772Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.078399ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.573773546Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.574858664Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.085478ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.577511623Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.583416507Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.900364ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.586706113Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.587843782Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.138689ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.59047218Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.591513748Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.042458ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.594789193Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.628024638Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=33.228585ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.630730018Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.657147676Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.417608ms
Dec  7 14:54:45 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 28 completed events
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.846262119Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.849313858Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=3.05421ms
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.852515131Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.854192295Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.676784ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.856937087Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[6.e( v 53'39 (0'0,53'39] local-lis/les=65/66 n=1 ec=56/22 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=15.669067383s) [0] r=-1 lpr=85 pi=[65,85)/1 crt=53'39 mlcod 53'39 active pruub 284.603149414s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[6.e( v 53'39 (0'0,53'39] local-lis/les=65/66 n=1 ec=56/22 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=15.668998718s) [0] r=-1 lpr=85 pi=[65,85)/1 crt=53'39 mlcod 0'0 unknown NOTIFY pruub 284.603149414s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[6.6( v 53'39 (0'0,53'39] local-lis/les=65/66 n=1 ec=56/22 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=15.668478966s) [0] r=-1 lpr=85 pi=[65,85)/1 crt=53'39 mlcod 53'39 active pruub 284.603027344s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[6.6( v 53'39 (0'0,53'39] local-lis/les=65/66 n=1 ec=56/22 lis/c=65/65 les/c/f=66/66/0 sis=85 pruub=15.668437958s) [0] r=-1 lpr=85 pi=[65,85)/1 crt=53'39 mlcod 0'0 unknown NOTIFY pruub 284.603027344s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.866443813Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=9.498736ms
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=85) [1] r=0 lpr=85 pi=[75,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=68/68 les/c/f=69/69/0 sis=85) [1] r=0 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=68/68 les/c/f=69/69/0 sis=85) [1] r=0 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:45 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 85 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=70/70 les/c/f=71/71/0 sis=85) [1] r=0 lpr=85 pi=[70,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.883451426Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.889059322Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.608236ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.891182718Z level=info msg="Executing migration" id="create alert_rule table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.892206234Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.027677ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.901766034Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.902857101Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.092058ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.90471417Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.905687115Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=973.205µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.919225617Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.920949292Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.725895ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.923261103Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.923330054Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=69.321µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.925938912Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.932425541Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.477228ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.935696856Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.941341213Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.644107ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.945888981Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.950136262Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.245601ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.952523584Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.953269813Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=750.469µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.955223565Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.956059986Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=835.801µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.957648787Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.961649991Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.001044ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.964142116Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.968755316Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=4.61253ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.970882622Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.971651672Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=768.53µs
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.974205418Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.978608243Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.402285ms
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.980858322Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec  7 14:54:45 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:45.98501551Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.156208ms
Dec  7 14:54:45 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.043295847Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.04340374Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=111.603µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.047509457Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.048697487Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.18829ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.050836413Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.051666915Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=829.982µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.054628832Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.055501204Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=871.452µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.057823905Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.057881606Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=58.081µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.062387374Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.067061246Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.674803ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.069198711Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.073464002Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.265851ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.075197207Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.079360345Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.162738ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.081231685Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.08565975Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.425605ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.087836006Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec  7 14:54:46 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.09412839Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.296124ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.096275945Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.096395989Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=120.044µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.100105215Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.101763418Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.656253ms
Dec  7 14:54:46 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.105807094Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.119224483Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=13.413319ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.12177626Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.121843352Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=70.752µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.124197782Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.130379424Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.180742ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.132304503Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.133383722Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.077789ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.135504046Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.140101317Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.596751ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.141951655Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.142572451Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=620.856µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.144712847Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.145901098Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.188342ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.148039343Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec  7 14:54:46 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v121: 337 pgs: 1 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 335 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.154408739Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.368616ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.157458668Z level=info msg="Executing migration" id="create provenance_type table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.158387683Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=929.055µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.249462353Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.251214909Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.754455ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:46 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.280691236Z level=info msg="Executing migration" id="create alert_image table"
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.28159842Z level=info msg="Migration successfully executed" id="create alert_image table" duration=912.214µs
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.284923826Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.285773908Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=850.402µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.29007176Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.290137672Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=64.422µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.291936758Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.292862633Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=924.905µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.294910776Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.296064656Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.15426ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.298382567Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.29889569Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.300963694Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.301498147Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=534.523µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.303339675Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.304480185Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.13977ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.306354374Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.314454354Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=8.09805ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.319110226Z level=info msg="Executing migration" id="create library_element table v1"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.320345828Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.235292ms
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:46 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev 465ee16e-c688-4dfe-ae71-ae3fa6aa1ac6 (Updating grafana deployment (+1 -> 1))
Dec  7 14:54:46 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 465ee16e-c688-4dfe-ae71-ae3fa6aa1ac6 (Updating grafana deployment (+1 -> 1)) in 18 seconds
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.322580256Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.323370037Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=789.351µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.325511972Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.326151999Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=639.857µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.327839933Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.329025514Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.185191ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.330921013Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.332081924Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.1576ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.335060081Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.335087312Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=27.961µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.33694615Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.337011761Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=66.211µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.339161588Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.339467005Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=305.487µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.341469878Z level=info msg="Executing migration" id="create data_keys table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.34270781Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.239321ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.344837695Z level=info msg="Executing migration" id="create secrets table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.345697588Z level=info msg="Migration successfully executed" id="create secrets table" duration=859.673µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.347450194Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:46 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c0028c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.387440864Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=39.99071ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.389270832Z level=info msg="Executing migration" id="add name column into data_keys"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.394079397Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.808305ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.395615987Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.39573284Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=118.523µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.397462766Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:46 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev d7abc218-59f3-4a9a-85e6-a313845637d1 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.422743994Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=25.277787ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.424810947Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.451591604Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=26.783157ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.453692039Z level=info msg="Executing migration" id="create kv_store table v1"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.45450751Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=815.211µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.456371729Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.45722568Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=853.851µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.459441699Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.459653474Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=211.755µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.461824021Z level=info msg="Executing migration" id="create permission table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.46258047Z level=info msg="Migration successfully executed" id="create permission table" duration=758.959µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.467602041Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.468326829Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=725.928µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.470218739Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.470987479Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=767.99µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.472935679Z level=info msg="Executing migration" id="create role table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.473634998Z level=info msg="Migration successfully executed" id="create role table" duration=700.219µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.475342142Z level=info msg="Executing migration" id="add column display_name"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.480829935Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.484753ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.482736074Z level=info msg="Executing migration" id="add column group_name"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.488521776Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.785682ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.490807555Z level=info msg="Executing migration" id="add index role.org_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.491576165Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=768.891µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.493191527Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.494108581Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=916.684µs
Dec  7 14:54:46 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.nywreh on compute-0
Dec  7 14:54:46 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.nywreh on compute-0
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.496223466Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.497066568Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=843.342µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.49906188Z level=info msg="Executing migration" id="create team role table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.499713157Z level=info msg="Migration successfully executed" id="create team role table" duration=650.998µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.50176226Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.502576101Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=813.061µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.592272486Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.59510093Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.829664ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.726525331Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.72884622Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.32429ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.732481445Z level=info msg="Executing migration" id="create user role table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.734291892Z level=info msg="Migration successfully executed" id="create user role table" duration=1.814137ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.737066114Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.738905112Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.843858ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.741397977Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.742325881Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=927.814µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.744529878Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.745441252Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=911.244µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.751217943Z level=info msg="Executing migration" id="create builtin role table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.752074225Z level=info msg="Migration successfully executed" id="create builtin role table" duration=855.172µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.753886362Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.754830207Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=943.175µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.756512991Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.757437025Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=924.024µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.759420656Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.766119201Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=6.698065ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.770171226Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.771698346Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.5321ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.774180501Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.775929026Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.748235ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.778462642Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.780204567Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.740905ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.783112313Z level=info msg="Executing migration" id="add unique index role.uid"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.785089384Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.982821ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.787085047Z level=info msg="Executing migration" id="create seed assignment table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.788022431Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=937.055µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.790121615Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.791301416Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.178391ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.793511623Z level=info msg="Executing migration" id="add column hidden to role table"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.801899682Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.388898ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.804191792Z level=info msg="Executing migration" id="permission kind migration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.812514308Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.319466ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.8145204Z level=info msg="Executing migration" id="permission attribute migration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.8225788Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.0541ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.825119816Z level=info msg="Executing migration" id="permission identifier migration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.833257068Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.134372ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.835798135Z level=info msg="Executing migration" id="add permission identifier index"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.837162Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.364605ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.83910354Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.84135953Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.247789ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.844035969Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.845439766Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.404497ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.848212578Z level=info msg="Executing migration" id="create query_history table v1"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.849350387Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.137729ms
Dec  7 14:54:46 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.853125215Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.854431589Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.306564ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.856663388Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.85673129Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=68.732µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.85868121Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.858726121Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=45.571µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.861581275Z level=info msg="Executing migration" id="teams permissions migration"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.862206102Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=625.227µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.864403259Z level=info msg="Executing migration" id="dashboard permissions"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.865176769Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=774.33µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.86712549Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.867911261Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=787.171µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.869872251Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.870117497Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=245.216µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.872147641Z level=info msg="Executing migration" id="alerting notification permissions"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.872832518Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=684.117µs
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.875891408Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.876940175Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.049897ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.879153203Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.880725744Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.570541ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.883681671Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:46.893330302Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=9.647311ms
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:46 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:46 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195446 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.109263022Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.109463877Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=208.155µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.112899957Z level=info msg="Executing migration" id="create correlation table v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.11493495Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.035023ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.117511437Z level=info msg="Executing migration" id="add index correlations.uid"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.11877553Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.263963ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.123485032Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.125824543Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.339651ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.128258927Z level=info msg="Executing migration" id="add correlation config column"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.146289045Z level=info msg="Migration successfully executed" id="add correlation config column" duration=18.021078ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.148965295Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.151369098Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.398923ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.154653263Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.157187489Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.536456ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.17180769Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.177388026 +0000 UTC m=+0.102967992 container create 26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4 (image=quay.io/ceph/haproxy:2.3, name=cranky_knuth)
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=68/68 les/c/f=69/69/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[68,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[75,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[75,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=68/68 les/c/f=69/69/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[68,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=68/68 les/c/f=69/69/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[68,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=68/68 les/c/f=69/69/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[68,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=70/70 les/c/f=71/71/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[70,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=70/70 les/c/f=71/71/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[70,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.098849872 +0000 UTC m=+0.024429868 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.199728657Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=27.917667ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.202051047Z level=info msg="Executing migration" id="create correlation v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.203402722Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.349545ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.207831577Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.209531082Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.702205ms
Dec  7 14:54:47 np0005549633 systemd[1]: Started libpod-conmon-26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4.scope.
Dec  7 14:54:47 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.264089902Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.267815709Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=3.737857ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.270993331Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.272611384Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.617633ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.276226148Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.276569816Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=345.338µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.278438096Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.279461913Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.023467ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.282568743Z level=info msg="Executing migration" id="add provisioning column"
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.290706685 +0000 UTC m=+0.216286681 container init 26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4 (image=quay.io/ceph/haproxy:2.3, name=cranky_knuth)
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.295768466Z level=info msg="Migration successfully executed" id="add provisioning column" duration=13.212473ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.298490737Z level=info msg="Executing migration" id="create entity_events table"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.300102549Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.612382ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.302493172Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.304173875 +0000 UTC m=+0.229753831 container start 26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4 (image=quay.io/ceph/haproxy:2.3, name=cranky_knuth)
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.30438295Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.889168ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.307342507Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.308152799Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.308471527 +0000 UTC m=+0.234051493 container attach 26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4 (image=quay.io/ceph/haproxy:2.3, name=cranky_knuth)
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.311015333Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  7 14:54:47 np0005549633 cranky_knuth[98228]: 0 0
Dec  7 14:54:47 np0005549633 systemd[1]: libpod-26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4.scope: Deactivated successfully.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.311901397Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.312845681 +0000 UTC m=+0.238425637 container died 26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4 (image=quay.io/ceph/haproxy:2.3, name=cranky_knuth)
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.314623518Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.316233219Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.686283ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.320116731Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.322054591Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.9369ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.324521375Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.326754043Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.231518ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.329451383Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.331670021Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=2.218088ms
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=73/73 les/c/f=74/74/0 sis=86) [1] r=0 lpr=86 pi=[73,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=86) [1] r=0 lpr=86 pi=[75,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  7 14:54:47 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 86 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=86) [1] r=0 lpr=86 pi=[75,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:47 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.338804957Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.341866556Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=3.061189ms
Dec  7 14:54:47 np0005549633 systemd[1]: var-lib-containers-storage-overlay-4925b4dbb4a506e23371877bbaeae9f64bb5acc171b5bc6f4a501e4f81e629ca-merged.mount: Deactivated successfully.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.350017779Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.352203425Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.198637ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.360926103Z level=info msg="Executing migration" id="Drop public config table"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.361772424Z level=info msg="Migration successfully executed" id="Drop public config table" duration=846.011µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.479520459Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.481305726Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.791107ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.483727678Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.4849443Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.213082ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.489413646Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.491345567Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.931381ms
Dec  7 14:54:47 np0005549633 podman[98212]: 2025-12-07 19:54:47.491585593 +0000 UTC m=+0.417165549 container remove 26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4 (image=quay.io/ceph/haproxy:2.3, name=cranky_knuth)
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.495829724Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.498545884Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.726041ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.503685219Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.529419698Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.707529ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.531621306Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.545187368Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=13.558972ms
Dec  7 14:54:47 np0005549633 systemd[1]: libpod-conmon-26807f2dab002241a520131b39799c8b20f00f40d1ecb561b05c9363ebe93aa4.scope: Deactivated successfully.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.547650362Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec  7 14:54:47 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.560864467Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=13.207794ms
Dec  7 14:54:47 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.640017967Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.640805858Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=784.81µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.643879627Z level=info msg="Executing migration" id="add share column"
Dec  7 14:54:47 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.655781567Z level=info msg="Migration successfully executed" id="add share column" duration=11.89752ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.658105148Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.658324793Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=220.666µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.660281025Z level=info msg="Executing migration" id="create file table"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.661328711Z level=info msg="Migration successfully executed" id="create file table" duration=1.045716ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.663472967Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.664672089Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.197892ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.666429604Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.667636986Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.209022ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.669545755Z level=info msg="Executing migration" id="create file_meta table"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.670616983Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.070508ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.672468601Z level=info msg="Executing migration" id="file table idx: path key"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.673879778Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.410497ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.677973794Z level=info msg="Executing migration" id="set path collation in file table"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.678042866Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=70.402µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.679970487Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.680044269Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=73.772µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.681894397Z level=info msg="Executing migration" id="managed permissions migration"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.68243783Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=543.023µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.684263769Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.684513155Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=249.396µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.687960394Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.689480894Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.52012ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.691682431Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.700526961Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.84376ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.702490333Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.702683428Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=195.405µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.704741831Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.706023415Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.281133ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.70812279Z level=info msg="Executing migration" id="update group index for alert rules"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.70851204Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=390.01µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.710396488Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.710637244Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=240.726µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.712459612Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.712934914Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=475.282µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.714851414Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.723280364Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=8.425489ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.786837278Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.803868621Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=17.032563ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.806630783Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.808862432Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=2.232649ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.811667905Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.909242744Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=97.571619ms
Dec  7 14:54:47 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.936108694Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.937466928Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.359324ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.941648847Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.942814218Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.169591ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.944831411Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.972273034Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=27.440513ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.974595565Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.986616308Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=12.068614ms
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.988871967Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.989120603Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=248.677µs
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.991001482Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec  7 14:54:47 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:47.991138616Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=137.624µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.01051443Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.010931701Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=420.471µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.014105123Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.014335539Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=230.906µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.017134712Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.017369709Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=235.367µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.019997026Z level=info msg="Executing migration" id="create folder table"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.021077305Z level=info msg="Migration successfully executed" id="create folder table" duration=1.080199ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.022794639Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.023773565Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=979.896µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.026394233Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.027260476Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=866.163µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.028790616Z level=info msg="Executing migration" id="Update folder title length"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.028813527Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.291µs
Dec  7 14:54:48 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.03276468Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.033656452Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=891.662µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.036018474Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec  7 14:54:48 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.0370107Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=992.106µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.042162134Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.048887299Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=6.724525ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.051378874Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.051829305Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=450.191µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.053626882Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.05391335Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=286.048µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.055701516Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.056627501Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=924.245µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.058210752Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.059114615Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=903.973µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.063456558Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.064370882Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=914.374µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.067661647Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.068588301Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=926.314µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.071321183Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.072186666Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=865.333µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.075633945Z level=info msg="Executing migration" id="create anon_device table"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.076380945Z level=info msg="Migration successfully executed" id="create anon_device table" duration=746.67µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.079118126Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.080102551Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=984.265µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.082590656Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.083462909Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=872.253µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.085725727Z level=info msg="Executing migration" id="create signing_key table"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.086522959Z level=info msg="Migration successfully executed" id="create signing_key table" duration=794.781µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.089805774Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.090707807Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=900.663µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.093646644Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.09462949Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=983.296µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.097814412Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.09811513Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=301.188µs
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.101603321Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.108086069Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.485388ms
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.110086112Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.111052776Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=967.384µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.113307946Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.11421952Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=912.015µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.116063857Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.116998322Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=934.535µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.119672852Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.120685987Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.013255ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.123678715Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.124677491Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=998.506µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.129728693Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.130659937Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=931.064µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.134368974Z level=info msg="Executing migration" id="create sso_setting table"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.135249977Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=880.783µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.138416519Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.139088006Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=672.097µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.140837932Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.141114719Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=277.357µs
Dec  7 14:54:48 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 2 peering, 1 active+clean+scrubbing, 4 unknown, 1 active+clean+scrubbing+deep, 329 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  7 14:54:48 np0005549633 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.nywreh for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:48 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.307053868Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.307788678Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=739µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.311932706Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.328276711Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=16.342386ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.331116275Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 87 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[75,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 87 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[75,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 87 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=73/73 les/c/f=74/74/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 87 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=73/73 les/c/f=74/74/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[73,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 87 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[75,87)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:48 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 87 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=75/75 les/c/f=76/76/0 sis=87) [1]/[2] r=-1 lpr=87 pi=[75,87)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.355811337Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=24.692532ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.35862205Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.359117734Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=495.394µs
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=migrator t=2025-12-07T19:54:48.362222875Z level=info msg="migrations completed" performed=547 skipped=0 duration=9.053494607s
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:48 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore t=2025-12-07T19:54:48.364409981Z level=info msg="Created default organization"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=secrets t=2025-12-07T19:54:48.367101141Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=plugin.store t=2025-12-07T19:54:48.406783465Z level=info msg="Loading plugins..."
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: Deploying daemon haproxy.rgw.default.compute-0.nywreh on compute-0
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=local.finder t=2025-12-07T19:54:48.497634929Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=plugin.store t=2025-12-07T19:54:48.49766337Z level=info msg="Plugins loaded" count=55 duration=90.882566ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=query_data t=2025-12-07T19:54:48.500376201Z level=info msg="Query Service initialization"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=live.push_http t=2025-12-07T19:54:48.505325069Z level=info msg="Live Push Gateway initialization"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.migration t=2025-12-07T19:54:48.509240211Z level=info msg=Starting
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.migration t=2025-12-07T19:54:48.50997774Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.migration orgID=1 t=2025-12-07T19:54:48.510831032Z level=info msg="Migrating alerts for organisation"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.migration orgID=1 t=2025-12-07T19:54:48.512164557Z level=info msg="Alerts found to migrate" alerts=0
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.migration t=2025-12-07T19:54:48.515319019Z level=info msg="Completed alerting migration"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.state.manager t=2025-12-07T19:54:48.552144457Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=infra.usagestats.collector t=2025-12-07T19:54:48.555713131Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=provisioning.datasources t=2025-12-07T19:54:48.558026301Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=provisioning.alerting t=2025-12-07T19:54:48.577255861Z level=info msg="starting to provision alerting"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=provisioning.alerting t=2025-12-07T19:54:48.577285962Z level=info msg="finished to provision alerting"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=grafanaStorageLogger t=2025-12-07T19:54:48.577459097Z level=info msg="Storage starting"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.state.manager t=2025-12-07T19:54:48.578202626Z level=info msg="Warming state cache for startup"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.multiorg.alertmanager t=2025-12-07T19:54:48.578905805Z level=info msg="Starting MultiOrg Alertmanager"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=http.server t=2025-12-07T19:54:48.582349364Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=http.server t=2025-12-07T19:54:48.583218337Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  7 14:54:48 np0005549633 podman[98370]: 2025-12-07 19:54:48.604719306 +0000 UTC m=+0.074900820 container create ee42c94de0f816d30945b600ff72cf31a08670d453e95ae74b440beead9e8de5 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-rgw-default-compute-0-nywreh)
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=provisioning.dashboard t=2025-12-07T19:54:48.650346604Z level=info msg="starting to provision dashboards"
Dec  7 14:54:48 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3d489609fd970ad176d7dfcab70adf3a314b33b8576a51950891ce70a0d6a7d/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:48 np0005549633 podman[98370]: 2025-12-07 19:54:48.571892912 +0000 UTC m=+0.042074466 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  7 14:54:48 np0005549633 podman[98370]: 2025-12-07 19:54:48.672460009 +0000 UTC m=+0.142641543 container init ee42c94de0f816d30945b600ff72cf31a08670d453e95ae74b440beead9e8de5 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-rgw-default-compute-0-nywreh)
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.state.manager t=2025-12-07T19:54:48.675799026Z level=info msg="State cache has been initialized" states=0 duration=97.59383ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ngalert.scheduler t=2025-12-07T19:54:48.675851288Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=ticker t=2025-12-07T19:54:48.67592565Z level=info msg=starting first_tick=2025-12-07T19:54:50Z
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore.transactions t=2025-12-07T19:54:48.676828354Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 14:54:48 np0005549633 podman[98370]: 2025-12-07 19:54:48.678537738 +0000 UTC m=+0.148719252 container start ee42c94de0f816d30945b600ff72cf31a08670d453e95ae74b440beead9e8de5 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-rgw-default-compute-0-nywreh)
Dec  7 14:54:48 np0005549633 bash[98370]: ee42c94de0f816d30945b600ff72cf31a08670d453e95ae74b440beead9e8de5
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=plugins.update.checker t=2025-12-07T19:54:48.690792097Z level=info msg="Update check succeeded" duration=105.454146ms
Dec  7 14:54:48 np0005549633 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.nywreh for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-rgw-default-compute-0-nywreh[98393]: [NOTICE] 340/195448 (2) : New worker #1 (4) forked
Dec  7 14:54:48 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore.transactions t=2025-12-07T19:54:48.740437569Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore.transactions t=2025-12-07T19:54:48.751142208Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=sqlstore.transactions t=2025-12-07T19:54:48.764679919Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=grafana.update.checker t=2025-12-07T19:54:48.799099456Z level=info msg="Update check succeeded" duration=214.923355ms
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:48 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:48 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=provisioning.dashboard t=2025-12-07T19:54:48.995694522Z level=info msg="finished to provision dashboards"
Dec  7 14:54:48 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  7 14:54:49 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=grafana-apiserver t=2025-12-07T19:54:49.272044196Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  7 14:54:49 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-grafana-compute-0[98097]: logger=grafana-apiserver t=2025-12-07T19:54:49.272450656Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=4 ec=61/47 lis/c=86/75 les/c/f=87/76/0 sis=88) [1] r=0 lpr=88 pi=[75,88)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=4 ec=61/47 lis/c=86/75 les/c/f=87/76/0 sis=88) [1] r=0 lpr=88 pi=[75,88)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=86/68 les/c/f=87/69/0 sis=88) [1] r=0 lpr=88 pi=[68,88)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=86/68 les/c/f=87/69/0 sis=88) [1] r=0 lpr=88 pi=[68,88)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=86/70 les/c/f=87/71/0 sis=88) [1] r=0 lpr=88 pi=[70,88)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=86/70 les/c/f=87/71/0 sis=88) [1] r=0 lpr=88 pi=[70,88)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=86/68 les/c/f=87/69/0 sis=88) [1] r=0 lpr=88 pi=[68,88)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:49 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 88 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=86/68 les/c/f=87/69/0 sis=88) [1] r=0 lpr=88 pi=[68,88)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:49 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:49 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.rzayey on compute-2
Dec  7 14:54:49 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.rzayey on compute-2
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 2 peering, 1 active+clean+scrubbing, 4 unknown, 1 active+clean+scrubbing+deep, 329 active+clean; 455 KiB data, 129 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] scanning for idle connections..
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [volumes INFO mgr_util] cleaning up connections: []
Dec  7 14:54:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:50 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:50 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:50 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=1.681042552s ======
Dec  7 14:54:50 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:54:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=1.681042552s
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:50.813603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137290813823, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7965, "num_deletes": 251, "total_data_size": 15898737, "memory_usage": 16636256, "flush_reason": "Manual Compaction"}
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  7 14:54:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 89 pg[10.1e( v 53'1163 (0'0,53'1163] local-lis/les=88/89 n=5 ec=61/47 lis/c=86/70 les/c/f=87/71/0 sis=88) [1] r=0 lpr=88 pi=[70,88)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 89 pg[10.e( v 53'1163 (0'0,53'1163] local-lis/les=88/89 n=5 ec=61/47 lis/c=86/68 les/c/f=87/69/0 sis=88) [1] r=0 lpr=88 pi=[68,88)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 89 pg[10.16( v 53'1163 (0'0,53'1163] local-lis/les=88/89 n=4 ec=61/47 lis/c=86/75 les/c/f=87/76/0 sis=88) [1] r=0 lpr=88 pi=[75,88)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:50 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 89 pg[10.6( v 53'1163 (0'0,53'1163] local-lis/les=88/89 n=6 ec=61/47 lis/c=86/68 les/c/f=87/69/0 sis=88) [1] r=0 lpr=88 pi=[68,88)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:50 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:50 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:50 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 29 completed events
Dec  7 14:54:50 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137291249912, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 13795819, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 8102, "table_properties": {"data_size": 13766112, "index_size": 19096, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 92636, "raw_average_key_size": 24, "raw_value_size": 13692933, "raw_average_value_size": 3594, "num_data_blocks": 843, "num_entries": 3809, "num_filter_entries": 3809, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765136917, "oldest_key_time": 1765136917, "file_creation_time": 1765137290, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63acacd7-c601-437a-ae8a-58b144664c23", "db_session_id": "ORNL7KHN9J7Q3V6MXI96", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 436394 microseconds, and 54836 cpu microseconds.
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.250023) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 13795819 bytes OK
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.250053) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.251979) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.252004) EVENT_LOG_v1 {"time_micros": 1765137291251997, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.252032) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 15862375, prev total WAL file size 15875449, number of live WAL files 2.
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.258051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(13MB) 13(57KB) 8(1944B)]
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137291258218, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 13856257, "oldest_snapshot_seqno": -1}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:51 np0005549633 ceph-mgr[74680]: [progress WARNING root] Starting Global Recovery Event,9 pgs not in active + clean state
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3626 keys, 13809883 bytes, temperature: kUnknown
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137291378342, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 13809883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13780581, "index_size": 19144, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9093, "raw_key_size": 90781, "raw_average_key_size": 25, "raw_value_size": 13709108, "raw_average_value_size": 3780, "num_data_blocks": 847, "num_entries": 3626, "num_filter_entries": 3626, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765136915, "oldest_key_time": 0, "file_creation_time": 1765137291, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63acacd7-c601-437a-ae8a-58b144664c23", "db_session_id": "ORNL7KHN9J7Q3V6MXI96", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.378727) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 13809883 bytes
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.380074) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.2 rd, 114.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(13.2, 0.0 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3918, records dropped: 292 output_compression: NoCompression
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.380111) EVENT_LOG_v1 {"time_micros": 1765137291380094, "job": 4, "event": "compaction_finished", "compaction_time_micros": 120235, "compaction_time_cpu_micros": 54770, "output_level": 6, "num_output_files": 1, "total_output_size": 13809883, "num_input_records": 3918, "num_output_records": 3626, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137291384734, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137291384827, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137291384885, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:51.257783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: Deploying daemon haproxy.rgw.default.compute-2.rzayey on compute-2
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  7 14:54:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 90 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=87/75 les/c/f=88/76/0 sis=90) [1] r=0 lpr=90 pi=[75,90)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 90 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=87/73 les/c/f=88/74/0 sis=90) [1] r=0 lpr=90 pi=[73,90)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 90 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=87/75 les/c/f=88/76/0 sis=90) [1] r=0 lpr=90 pi=[75,90)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 90 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=87/75 les/c/f=88/76/0 sis=90) [1] r=0 lpr=90 pi=[75,90)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 90 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=87/75 les/c/f=88/76/0 sis=90) [1] r=0 lpr=90 pi=[75,90)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:51 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 90 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=5 ec=61/47 lis/c=87/73 les/c/f=88/74/0 sis=90) [1] r=0 lpr=90 pi=[73,90)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:51 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:51 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:51 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:54:51.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 14:54:51 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec  7 14:54:52 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.xqsbba on compute-2
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.xqsbba on compute-2
Dec  7 14:54:52 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 1 active+recovering+remapped, 2 activating+remapped, 334 active+clean; 455 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 17/226 objects misplaced (7.522%)
Dec  7 14:54:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:52 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:52 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:52 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:52 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:52 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:54:52.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:52 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  7 14:54:52 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:52 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efb9c003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:53 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:53 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:53 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:53 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:53 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  7 14:54:53 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  7 14:54:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 91 pg[10.f( v 53'1163 (0'0,53'1163] local-lis/les=90/91 n=6 ec=61/47 lis/c=87/75 les/c/f=88/76/0 sis=90) [1] r=0 lpr=90 pi=[75,90)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 91 pg[10.7( v 53'1163 (0'0,53'1163] local-lis/les=90/91 n=6 ec=61/47 lis/c=87/75 les/c/f=88/76/0 sis=90) [1] r=0 lpr=90 pi=[75,90)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:53 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 91 pg[10.1f( v 53'1163 (0'0,53'1163] local-lis/les=90/91 n=5 ec=61/47 lis/c=87/73 les/c/f=88/74/0 sis=90) [1] r=0 lpr=90 pi=[73,90)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:53 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:53 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:53 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:54:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:54 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec  7 14:54:54 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 1 active+recovering+remapped, 2 activating+remapped, 334 active+clean; 455 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 17/226 objects misplaced (7.522%)
Dec  7 14:54:54 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:54 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:54 np0005549633 ceph-mon[74384]: Deploying daemon keepalived.rgw.default.compute-2.xqsbba on compute-2
Dec  7 14:54:54 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec  7 14:54:54 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:54 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:54 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba4003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:54 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:54 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:54 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:54:54.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:54 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:54 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:55 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec  7 14:54:55 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec  7 14:54:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  7 14:54:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  7 14:54:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:55 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 14:54:55 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:55 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:55 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:55 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:55 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:55 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.hpdinp on compute-0
Dec  7 14:54:55 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.hpdinp on compute-0
Dec  7 14:54:55 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:55 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 14:54:55 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:54:55.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 14:54:56 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec  7 14:54:56 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec  7 14:54:56 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 1 active+recovering+remapped, 2 activating+remapped, 334 active+clean; 455 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 17/226 objects misplaced (7.522%)
Dec  7 14:54:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  7 14:54:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.373085773 +0000 UTC m=+0.060590477 container create 15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_nobel, io.openshift.expose-services=, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, architecture=x86_64, version=2.2.4)
Dec  7 14:54:56 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:56 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:56 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:54:56.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:56 np0005549633 systemd[1]: Started libpod-conmon-15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110.scope.
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.342697173 +0000 UTC m=+0.030201937 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 14:54:56 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.483832946 +0000 UTC m=+0.171337700 container init 15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_nobel, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, architecture=x86_64, name=keepalived, com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.496476955 +0000 UTC m=+0.183981659 container start 15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_nobel, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, version=2.2.4, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.501321532 +0000 UTC m=+0.188826296 container attach 15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_nobel, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, description=keepalived for Ceph)
Dec  7 14:54:56 np0005549633 trusting_nobel[98524]: 0 0
Dec  7 14:54:56 np0005549633 systemd[1]: libpod-15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110.scope: Deactivated successfully.
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.505411578 +0000 UTC m=+0.192916322 container died 15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_nobel, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, build-date=2023-02-22T09:23:20)
Dec  7 14:54:56 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:56 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:56 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:56 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  7 14:54:56 np0005549633 ceph-mon[74384]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  7 14:54:56 np0005549633 ceph-mon[74384]: Deploying daemon keepalived.rgw.default.compute-0.hpdinp on compute-0
Dec  7 14:54:56 np0005549633 systemd[1]: var-lib-containers-storage-overlay-b9629192e16ce5715bf00f8ace63e0fb6aff9d28491156e1ea04ebb12766ae9a-merged.mount: Deactivated successfully.
Dec  7 14:54:56 np0005549633 podman[98507]: 2025-12-07 19:54:56.568653934 +0000 UTC m=+0.256158638 container remove 15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_nobel, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, description=keepalived for Ceph, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc.)
Dec  7 14:54:56 np0005549633 systemd[1]: libpod-conmon-15250dfd098f7f06ec1c9c301c372be208a1d1a71342c57443ab8abfad9fa110.scope: Deactivated successfully.
Dec  7 14:54:56 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:56 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:56 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:56 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:56 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:57 np0005549633 systemd[1]: Reloading.
Dec  7 14:54:57 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:54:57 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:54:57 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec  7 14:54:57 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec  7 14:54:57 np0005549633 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.hpdinp for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:54:57 np0005549633 podman[98670]: 2025-12-07 19:54:57.656295194 +0000 UTC m=+0.052178119 container create 918903a85b9173ec1b0e5a9707b2b8a130e1b09418ff2b4bf6334603eced9882 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Dec  7 14:54:57 np0005549633 podman[98670]: 2025-12-07 19:54:57.633695126 +0000 UTC m=+0.029578051 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  7 14:54:57 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8058ce5d4f51bfe91d421ddea750f748a009fb62c7857d041037e1264df3bb86/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  7 14:54:57 np0005549633 podman[98670]: 2025-12-07 19:54:57.75874458 +0000 UTC m=+0.154627505 container init 918903a85b9173ec1b0e5a9707b2b8a130e1b09418ff2b4bf6334603eced9882 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec  7 14:54:57 np0005549633 podman[98670]: 2025-12-07 19:54:57.768471743 +0000 UTC m=+0.164354638 container start 918903a85b9173ec1b0e5a9707b2b8a130e1b09418ff2b4bf6334603eced9882 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, version=2.2.4)
Dec  7 14:54:57 np0005549633 bash[98670]: 918903a85b9173ec1b0e5a9707b2b8a130e1b09418ff2b4bf6334603eced9882
Dec  7 14:54:57 np0005549633 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.hpdinp for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  7 14:54:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Starting VRRP child process, pid=4
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: Startup complete
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:57 2025: (VI_0) Entering BACKUP STATE
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: (VI_0) Entering BACKUP STATE (init)
Dec  7 14:54:57 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:57 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:57 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:54:57.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:57 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:57 2025: VRRP_Script(check_backend) succeeded
Dec  7 14:54:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:54:57 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:57 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:58 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev d7abc218-59f3-4a9a-85e6-a313845637d1 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  7 14:54:58 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event d7abc218-59f3-4a9a-85e6-a313845637d1 (Updating ingress.rgw.default deployment (+4 -> 4)) in 12 seconds
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:58 np0005549633 ceph-mgr[74680]: [progress INFO root] update: starting ev eeccc26c-8076-438f-baad-9b35322952c1 (Updating prometheus deployment (+1 -> 1))
Dec  7 14:54:58 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 455 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 14:54:58 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec  7 14:54:58 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec  7 14:54:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:58 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc40012b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:58 np0005549633 ceph-mgr[74680]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec  7 14:54:58 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec  7 14:54:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:58 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:58 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:58 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:58 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:54:58.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:54:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:58 2025: (VI_0) Entering MASTER STATE
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 14:54:58 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  7 14:54:58 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:58 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  7 14:54:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:54:59 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec  7 14:54:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:54:59 2025: (VI_0) received an invalid passwd!
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 92 pg[6.8( empty local-lis/les=0/0 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 92 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=7 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=15.280008316s) [0] r=-1 lpr=92 pi=[61,92)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 297.453765869s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 92 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=7 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=15.279960632s) [0] r=-1 lpr=92 pi=[61,92)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 297.453765869s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 92 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=15.277982712s) [0] r=-1 lpr=92 pi=[61,92)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 297.453002930s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 92 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=92 pruub=15.277926445s) [0] r=-1 lpr=92 pi=[61,92)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 297.453002930s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 93 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=93) [0]/[1] r=0 lpr=93 pi=[61,93)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 93 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=93) [0]/[1] r=0 lpr=93 pi=[61,93)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 93 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=7 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=93) [0]/[1] r=0 lpr=93 pi=[61,93)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 93 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=61/62 n=7 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=93) [0]/[1] r=0 lpr=93 pi=[61,93)/1 crt=53'1163 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  7 14:54:59 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 93 pg[6.8( v 53'39 (0'0,53'39] local-lis/les=92/93 n=0 ec=56/22 lis/c=56/56 les/c/f=57/57/0 sis=92) [1] r=0 lpr=92 pi=[56,92)/1 crt=53'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:54:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:59 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  7 14:54:59 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:54:59 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.381752) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137299381852, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 501, "num_deletes": 254, "total_data_size": 415906, "memory_usage": 426696, "flush_reason": "Manual Compaction"}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137299388863, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 401586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8104, "largest_seqno": 8603, "table_properties": {"data_size": 398677, "index_size": 881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6552, "raw_average_key_size": 17, "raw_value_size": 392521, "raw_average_value_size": 1030, "num_data_blocks": 38, "num_entries": 381, "num_filter_entries": 381, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765137290, "oldest_key_time": 1765137290, "file_creation_time": 1765137299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63acacd7-c601-437a-ae8a-58b144664c23", "db_session_id": "ORNL7KHN9J7Q3V6MXI96", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 7155 microseconds, and 3573 cpu microseconds.
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.388913) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 401586 bytes OK
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.388934) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.390622) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.390644) EVENT_LOG_v1 {"time_micros": 1765137299390638, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.390666) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 412877, prev total WAL file size 412877, number of live WAL files 2.
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.391249) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323535' seq:0, type:0; will stop at (end)
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(392KB)], [20(13MB)]
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137299391309, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 14211469, "oldest_snapshot_seqno": -1}
Dec  7 14:54:59 np0005549633 systemd-logind[797]: New session 37 of user zuul.
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3480 keys, 13775983 bytes, temperature: kUnknown
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137299519897, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13775983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13747426, "index_size": 18783, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 89769, "raw_average_key_size": 25, "raw_value_size": 13678117, "raw_average_value_size": 3930, "num_data_blocks": 814, "num_entries": 3480, "num_filter_entries": 3480, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765136915, "oldest_key_time": 0, "file_creation_time": 1765137299, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "63acacd7-c601-437a-ae8a-58b144664c23", "db_session_id": "ORNL7KHN9J7Q3V6MXI96", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.520241) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13775983 bytes
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.521779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.4 rd, 107.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.2 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(69.7) write-amplify(34.3) OK, records in: 4007, records dropped: 527 output_compression: NoCompression
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.521809) EVENT_LOG_v1 {"time_micros": 1765137299521796, "job": 6, "event": "compaction_finished", "compaction_time_micros": 128715, "compaction_time_cpu_micros": 54441, "output_level": 6, "num_output_files": 1, "total_output_size": 13775983, "num_input_records": 4007, "num_output_records": 3480, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137299522068, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765137299525996, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.391140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.526097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.526106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.526109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.526112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:59 np0005549633 ceph-mon[74384]: rocksdb: (Original Log Time 2025/12/07-19:54:59.526115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  7 14:54:59 np0005549633 systemd[1]: Started Session 37 of User zuul.
Dec  7 14:54:59 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:54:59 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:54:59 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:54:59.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: Deploying daemon prometheus.compute-0 on compute-0
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  7 14:55:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:55:00 2025: (VI_0) received an invalid passwd!
Dec  7 14:55:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:55:00 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec  7 14:55:00 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 455 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 14:55:00 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Dec  7 14:55:00 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  7 14:55:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:00 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  7 14:55:00 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  7 14:55:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 94 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=93/94 n=7 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=93) [0]/[1] async=[0] r=0 lpr=93 pi=[61,93)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:55:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 94 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=93/94 n=4 ec=61/47 lis/c=61/61 les/c/f=62/62/0 sis=93) [0]/[1] async=[0] r=0 lpr=93 pi=[61,93)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:55:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:00 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc40026c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:00 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:00 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.001000025s ======
Dec  7 14:55:00 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:00.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Dec  7 14:55:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 94 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=72/72 les/c/f=73/73/0 sis=94) [1] r=0 lpr=94 pi=[72,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:55:00 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 94 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=74/74 les/c/f=75/75/0 sis=94) [1] r=0 lpr=94 pi=[74,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:55:00 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:00 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc40026c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:01 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-nfs-cephfs-compute-0-hbjfrz[97131]: Sun Dec  7 19:55:01 2025: (VI_0) received an invalid passwd!
Dec  7 14:55:01 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:55:01 2025: (VI_0) received lower priority (90) advert from 192.168.122.102 - discarding
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  7 14:55:01 np0005549633 ceph-mgr[74680]: [progress INFO root] Writing back 30 completed events
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:01 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event 00e06fe2-b132-412f-b1a9-e21adb9dd9ee (Global Recovery Event) in 10 seconds
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  7 14:55:01 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=93/94 n=4 ec=61/47 lis/c=93/61 les/c/f=94/62/0 sis=95 pruub=14.831833839s) [0] async=[0] r=-1 lpr=95 pi=[61,95)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 299.397644043s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.18( v 53'1163 (0'0,53'1163] local-lis/les=93/94 n=4 ec=61/47 lis/c=93/61 les/c/f=94/62/0 sis=95 pruub=14.831728935s) [0] r=-1 lpr=95 pi=[61,95)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 299.397644043s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=93/94 n=7 ec=61/47 lis/c=93/61 les/c/f=94/62/0 sis=95 pruub=14.829203606s) [0] async=[0] r=-1 lpr=95 pi=[61,95)/1 crt=53'1163 lcod 0'0 mlcod 0'0 active pruub 299.395385742s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.8( v 53'1163 (0'0,53'1163] local-lis/les=93/94 n=7 ec=61/47 lis/c=93/61 les/c/f=94/62/0 sis=95 pruub=14.829131126s) [0] r=-1 lpr=95 pi=[61,95)/1 crt=53'1163 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 299.395385742s@ mbc={}] state<Start>: transitioning to Stray
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[72,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=74/74 les/c/f=75/75/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[74,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:55:01 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 95 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/47 lis/c=74/74 les/c/f=75/75/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[74,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  7 14:55:01 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-keepalived-rgw-default-compute-0-hpdinp[98684]: Sun Dec  7 19:55:01 2025: (VI_0) Entering MASTER STATE
Dec  7 14:55:01 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:01 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:01 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:01.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:02 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec  7 14:55:02 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec  7 14:55:02 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 2 unknown, 2 peering, 1 active+clean+scrubbing, 332 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 14:55:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  7 14:55:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:02 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:02 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:02 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:02.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:02 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  7 14:55:02 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.689258623 +0000 UTC m=+3.734299489 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.716480372 +0000 UTC m=+3.761521258 volume create be31bf3684fc84a54c61fc4474d7cdc41dd03e4d8588fa18766e648b9629dc5c
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.735082755 +0000 UTC m=+3.780123631 container create e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da (image=quay.io/prometheus/prometheus:v2.51.0, name=interesting_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:02 np0005549633 systemd[1]: Started libpod-conmon-e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da.scope.
Dec  7 14:55:02 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:55:02 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6447d8128102f0e79da49d7025538993c5d733cac9fc52e2933dd4bd9d65c18f/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.845468689 +0000 UTC m=+3.890509615 container init e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da (image=quay.io/prometheus/prometheus:v2.51.0, name=interesting_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.856412924 +0000 UTC m=+3.901453810 container start e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da (image=quay.io/prometheus/prometheus:v2.51.0, name=interesting_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:02 np0005549633 interesting_franklin[99123]: 65534 65534
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.860704135 +0000 UTC m=+3.905745001 container attach e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da (image=quay.io/prometheus/prometheus:v2.51.0, name=interesting_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:02 np0005549633 systemd[1]: libpod-e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da.scope: Deactivated successfully.
Dec  7 14:55:02 np0005549633 conmon[99123]: conmon e33c6933ca8a72910e45 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da.scope/container/memory.events
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.863062357 +0000 UTC m=+3.908103203 container died e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da (image=quay.io/prometheus/prometheus:v2.51.0, name=interesting_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:02 np0005549633 systemd[1]: var-lib-containers-storage-overlay-6447d8128102f0e79da49d7025538993c5d733cac9fc52e2933dd4bd9d65c18f-merged.mount: Deactivated successfully.
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.923180661 +0000 UTC m=+3.968221507 container remove e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da (image=quay.io/prometheus/prometheus:v2.51.0, name=interesting_franklin, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:02 np0005549633 podman[98787]: 2025-12-07 19:55:02.928236653 +0000 UTC m=+3.973277489 volume remove be31bf3684fc84a54c61fc4474d7cdc41dd03e4d8588fa18766e648b9629dc5c
Dec  7 14:55:02 np0005549633 systemd[1]: libpod-conmon-e33c6933ca8a72910e45994376d4a171ec80739c7dcca56a7598fc5b469355da.scope: Deactivated successfully.
Dec  7 14:55:02 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:02 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc40026c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.029840688 +0000 UTC m=+0.054052368 volume create 4f9aa6731694748ea11b183d62ed0405a8cc37c14372038c523eab26fbcdb916
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.038510173 +0000 UTC m=+0.062721853 container create 06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345 (image=quay.io/prometheus/prometheus:v2.51.0, name=nervous_ishizaka, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:03 np0005549633 systemd[1]: Started libpod-conmon-06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345.scope.
Dec  7 14:55:03 np0005549633 systemd[1]: Started libcrun container.
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.01339301 +0000 UTC m=+0.037604710 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  7 14:55:03 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b55271ec8d01ebbce21678998da126386cc3d733bb479704f785d2ae71019b4/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.123954697 +0000 UTC m=+0.148166427 container init 06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345 (image=quay.io/prometheus/prometheus:v2.51.0, name=nervous_ishizaka, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:03 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.13289954 +0000 UTC m=+0.157111270 container start 06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345 (image=quay.io/prometheus/prometheus:v2.51.0, name=nervous_ishizaka, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:03 np0005549633 nervous_ishizaka[99171]: 65534 65534
Dec  7 14:55:03 np0005549633 systemd[1]: libpod-06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345.scope: Deactivated successfully.
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.138342972 +0000 UTC m=+0.162554772 container attach 06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345 (image=quay.io/prometheus/prometheus:v2.51.0, name=nervous_ishizaka, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:03 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.138904167 +0000 UTC m=+0.163115907 container died 06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345 (image=quay.io/prometheus/prometheus:v2.51.0, name=nervous_ishizaka, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:03 np0005549633 systemd[1]: var-lib-containers-storage-overlay-8b55271ec8d01ebbce21678998da126386cc3d733bb479704f785d2ae71019b4-merged.mount: Deactivated successfully.
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.197675397 +0000 UTC m=+0.221887107 container remove 06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345 (image=quay.io/prometheus/prometheus:v2.51.0, name=nervous_ishizaka, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:03 np0005549633 podman[99143]: 2025-12-07 19:55:03.202758798 +0000 UTC m=+0.226970498 volume remove 4f9aa6731694748ea11b183d62ed0405a8cc37c14372038c523eab26fbcdb916
Dec  7 14:55:03 np0005549633 systemd[1]: libpod-conmon-06a5661a4dfe0b52cd18b4ebf695070f644a1146d999a9d495df920ae133b345.scope: Deactivated successfully.
Dec  7 14:55:03 np0005549633 systemd[1]: Reloading.
Dec  7 14:55:03 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:55:03 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:55:03 np0005549633 systemd[1]: Reloading.
Dec  7 14:55:03 np0005549633 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  7 14:55:03 np0005549633 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  7 14:55:03 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  7 14:55:03 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  7 14:55:03 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:03 np0005549633 systemd[1]: Starting Ceph prometheus.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d...
Dec  7 14:55:03 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:03 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:03.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:04 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 2 unknown, 2 peering, 1 active+clean+scrubbing, 332 active+clean; 456 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  7 14:55:04 np0005549633 podman[99395]: 2025-12-07 19:55:04.204672297 +0000 UTC m=+0.067483107 container create 7a159e9c001a8acc1a16cdb7db46f6047bf390016c3e4d1dbd9a7855395797d6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  7 14:55:04 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 97 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=7 ec=61/47 lis/c=95/72 les/c/f=96/73/0 sis=97) [1] r=0 lpr=97 pi=[72,97)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:55:04 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 97 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=7 ec=61/47 lis/c=95/72 les/c/f=96/73/0 sis=97) [1] r=0 lpr=97 pi=[72,97)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:55:04 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 97 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=95/74 les/c/f=96/75/0 sis=97) [1] r=0 lpr=97 pi=[74,97)/1 luod=0'0 crt=53'1163 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  7 14:55:04 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 97 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=0/0 n=6 ec=61/47 lis/c=95/74 les/c/f=96/75/0 sis=97) [1] r=0 lpr=97 pi=[74,97)/1 crt=53'1163 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  7 14:55:04 np0005549633 podman[99395]: 2025-12-07 19:55:04.163542426 +0000 UTC m=+0.026353226 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  7 14:55:04 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe61d5ab28f60559b5f245e35a21ab97319e4651a53a07113edb5e3ae9d7350/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 14:55:04 np0005549633 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe61d5ab28f60559b5f245e35a21ab97319e4651a53a07113edb5e3ae9d7350/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:04 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:04 np0005549633 podman[99395]: 2025-12-07 19:55:04.292719519 +0000 UTC m=+0.155530379 container init 7a159e9c001a8acc1a16cdb7db46f6047bf390016c3e4d1dbd9a7855395797d6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:04 np0005549633 podman[99395]: 2025-12-07 19:55:04.298683954 +0000 UTC m=+0.161494794 container start 7a159e9c001a8acc1a16cdb7db46f6047bf390016c3e4d1dbd9a7855395797d6 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:04 np0005549633 bash[99395]: 7a159e9c001a8acc1a16cdb7db46f6047bf390016c3e4d1dbd9a7855395797d6
Dec  7 14:55:04 np0005549633 systemd[1]: Started Ceph prometheus.compute-0 for a8ac706f-8288-541e-8e56-e1124d9b483d.
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.366Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.367Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.367Z caller=main.go:623 level=info host_details="(Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 x86_64 compute-0 (none))"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.367Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.367Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:04 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.378Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.380Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.383Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.383Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.386Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.386Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.96µs
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.386Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.386Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.386Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=55.892µs wal_replay_duration=631.357µs wbl_replay_duration=220ns total_replay_duration=730.68µs
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.390Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.390Z caller=main.go:1153 level=info msg="TSDB started"
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.390Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec  7 14:55:04 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:04 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 14:55:04 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:04.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:04 np0005549633 ceph-mgr[74680]: [progress INFO root] complete: finished ev eeccc26c-8076-438f-baad-9b35322952c1 (Updating prometheus deployment (+1 -> 1))
Dec  7 14:55:04 np0005549633 ceph-mgr[74680]: [progress INFO root] Completed event eeccc26c-8076-438f-baad-9b35322952c1 (Updating prometheus deployment (+1 -> 1)) in 6 seconds
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec  7 14:55:04 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.550Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=160.477227ms db_storage=1.44µs remote_storage=2.17µs web_handler=800ns query_engine=1.4µs scrape=107.080277ms scrape_sd=268.847µs notify=26.491µs notify_sd=17.131µs rules=52.259499ms tracing=14.1µs
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.550Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec  7 14:55:04 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-prometheus-compute-0[99417]: ts=2025-12-07T19:55:04.550Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec  7 14:55:05 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:05 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  7 14:55:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 98 pg[10.9( v 53'1163 (0'0,53'1163] local-lis/les=97/98 n=6 ec=61/47 lis/c=95/74 les/c/f=96/75/0 sis=97) [1] r=0 lpr=97 pi=[74,97)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:55:05 np0005549633 ceph-osd[82672]: osd.1 pg_epoch: 98 pg[10.19( v 53'1163 (0'0,53'1163] local-lis/les=97/98 n=7 ec=61/47 lis/c=95/72 les/c/f=96/73/0 sis=97) [1] r=0 lpr=97 pi=[72,97)/1 crt=53'1163 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  1: '-n'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  2: 'mgr.compute-0.dyzcyj'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  3: '-f'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  4: '--setuser'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  5: 'ceph'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  6: '--setgroup'
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr respawn  7: 'ceph'
Dec  7 14:55:05 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.dyzcyj(active, since 2m), standbys: compute-1.cgejnh, compute-2.orbdku
Dec  7 14:55:05 np0005549633 systemd[1]: session-35.scope: Deactivated successfully.
Dec  7 14:55:05 np0005549633 systemd[1]: session-35.scope: Consumed 1min 14ms CPU time.
Dec  7 14:55:05 np0005549633 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Dec  7 14:55:05 np0005549633 systemd-logind[797]: Removed session 35.
Dec  7 14:55:05 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setuser ceph since I am not root
Dec  7 14:55:05 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: ignoring --setgroup ceph since I am not root
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: pidfile_write: ignore empty --pid-file
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'alerts'
Dec  7 14:55:05 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:05.884+0000 7f64e3701140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'balancer'
Dec  7 14:55:05 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:05 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 14:55:05 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:05.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 14:55:05 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:05.964+0000 7f64e3701140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  7 14:55:05 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'cephadm'
Dec  7 14:55:06 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec  7 14:55:06 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec  7 14:55:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:06 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc40026c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:06 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:06 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:06 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:06 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:06.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:06 np0005549633 ceph-mon[74384]: from='mgr.14427 192.168.122.100:0/4281970455' entity='mgr.compute-0.dyzcyj' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  7 14:55:06 np0005549633 ovs-vsctl[99493]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  7 14:55:06 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'crash'
Dec  7 14:55:06 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:06.767+0000 7f64e3701140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:55:06 np0005549633 ceph-mgr[74680]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  7 14:55:06 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'dashboard'
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:07 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004140 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'devicehealth'
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:07.387+0000 7f64e3701140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'diskprediction_local'
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]:  from numpy import show_config as show_numpy_config
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:07.557+0000 7f64e3701140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'influx'
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:07.629+0000 7f64e3701140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'insights'
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'iostat'
Dec  7 14:55:07 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:07.771+0000 7f64e3701140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  7 14:55:07 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'k8sevents'
Dec  7 14:55:07 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:07 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.002000051s ======
Dec  7 14:55:07 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:07.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Dec  7 14:55:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'localpool'
Dec  7 14:55:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mds_autoscaler'
Dec  7 14:55:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:08 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:08 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4003bb0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:08 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:08 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:08 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'mirroring'
Dec  7 14:55:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'nfs'
Dec  7 14:55:08 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:08.823+0000 7f64e3701140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:55:08 np0005549633 ceph-mgr[74680]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  7 14:55:08 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'orchestrator'
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:09 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195509 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.053+0000 7f64e3701140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_perf_query'
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.152+0000 7f64e3701140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'osd_support'
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.224+0000 7f64e3701140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'pg_autoscaler'
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.306+0000 7f64e3701140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'progress'
Dec  7 14:55:09 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.382+0000 7f64e3701140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'prometheus'
Dec  7 14:55:09 np0005549633 lvm[100135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  7 14:55:09 np0005549633 lvm[100135]: VG ceph_vg0 finished
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff[96441]: [WARNING] 340/195509 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.743+0000 7f64e3701140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rbd_support'
Dec  7 14:55:09 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:09.849+0000 7f64e3701140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  7 14:55:09 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'restful'
Dec  7 14:55:09 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:09 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:09 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:10 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec  7 14:55:10 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rgw'
Dec  7 14:55:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:10 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:10.303+0000 7f64e3701140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'rook'
Dec  7 14:55:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:10 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4003bb0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:10 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:10 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:10 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:10.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:10.916+0000 7f64e3701140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'selftest'
Dec  7 14:55:10 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:10.990+0000 7f64e3701140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  7 14:55:10 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'snap_schedule'
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:11 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4003bb0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:11 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec  7 14:55:11 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:11.073+0000 7f64e3701140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'stats'
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'status'
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:11.221+0000 7f64e3701140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telegraf'
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:11.289+0000 7f64e3701140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'telemetry'
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:11.490+0000 7f64e3701140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'test_orchestrator'
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:11.721+0000 7f64e3701140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'volumes'
Dec  7 14:55:11 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh restarted
Dec  7 14:55:11 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.cgejnh started
Dec  7 14:55:11 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:11 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:11 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:11.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:11 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:11.993+0000 7f64e3701140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  7 14:55:11 np0005549633 ceph-mgr[74680]: mgr[py] Loading python module 'zabbix'
Dec  7 14:55:12 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku restarted
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.orbdku started
Dec  7 14:55:12 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.059+0000 7f64e3701140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.dyzcyj
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: ms_deliver_dispatch: unhandled message 0x55b12e025860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map Activating!
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.dyzcyj(active, starting, since 0.0352226s), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr handle_mgr_map I am now activating
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:12 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbc4003bb0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: balancer
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [balancer INFO root] Starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [balancer INFO root] Optimize plan auto_2025-12-07_19:55:12
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(cluster) log [INF] : Manager daemon compute-0.dyzcyj is now available
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: cephadm
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: crash
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: dashboard
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO sso] Loading SSO DB version=1
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:12 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbcc0049c0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: devicehealth
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: iostat
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: nfs
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: orchestrator
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: pg_autoscaler
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: progress
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [progress INFO root] Loading...
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f6461a38880>, <progress.module.GhostEvent object at 0x7f6461a38a30>, <progress.module.GhostEvent object at 0x7f6461a38a60>, <progress.module.GhostEvent object at 0x7f6461a38a90>, <progress.module.GhostEvent object at 0x7f6461a38ac0>, <progress.module.GhostEvent object at 0x7f6461a38af0>, <progress.module.GhostEvent object at 0x7f6461a38b20>, <progress.module.GhostEvent object at 0x7f6461a38b50>, <progress.module.GhostEvent object at 0x7f6461a38b80>, <progress.module.GhostEvent object at 0x7f6461a38bb0>, <progress.module.GhostEvent object at 0x7f6461a38be0>, <progress.module.GhostEvent object at 0x7f6461a38c10>, <progress.module.GhostEvent object at 0x7f6461a38c40>, <progress.module.GhostEvent object at 0x7f6461a38c70>, <progress.module.GhostEvent object at 0x7f6461a38ca0>, <progress.module.GhostEvent object at 0x7f6461a38cd0>, <progress.module.GhostEvent object at 0x7f6461a38d00>, <progress.module.GhostEvent object at 0x7f6461a38d30>, <progress.module.GhostEvent object at 0x7f6461a38d60>, <progress.module.GhostEvent object at 0x7f6461a38d90>, <progress.module.GhostEvent object at 0x7f6461a38dc0>, <progress.module.GhostEvent object at 0x7f6461a38df0>, <progress.module.GhostEvent object at 0x7f6461a38e20>, <progress.module.GhostEvent object at 0x7f6461a38e50>, <progress.module.GhostEvent object at 0x7f6461a38e80>, <progress.module.GhostEvent object at 0x7f6461a38eb0>, <progress.module.GhostEvent object at 0x7f6461a38ee0>, <progress.module.GhostEvent object at 0x7f6461a38f10>, <progress.module.GhostEvent object at 0x7f6461a38f40>, <progress.module.GhostEvent object at 0x7f6461a38f70>] historic events
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [progress INFO root] Loaded OSDMap, ready.
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [pg_autoscaler INFO root] _maybe_adjust
Dec  7 14:55:12 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:12 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.001000026s ======
Dec  7 14:55:12 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:12.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:12 np0005549633 systemd[1]: Starting Hostname Service...
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: prometheus
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: [07/Dec/2025:19:55:12] ENGINE Bus STARTING
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: CherryPy Checker:
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: The Application mounted at '' has an empty config.
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO root] Cache enabled
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO root] starting metric collection thread
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO root] Starting engine...
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO cherrypy.error] [07/Dec/2025:19:55:12] ENGINE Bus STARTING
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] recovery thread starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] starting setup
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: rbd_support
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: restful
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: status
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: telemetry
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [restful INFO root] server_addr: :: server_port: 8003
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [restful WARNING root] server not running: no certificate configured
Dec  7 14:55:12 np0005549633 systemd[1]: Started Hostname Service.
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: mgr load Constructed class from module: volumes
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.611+0000 7f644d021640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.617+0000 7f64487d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.617+0000 7f64487d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.617+0000 7f64487d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.617+0000 7f64487d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: 2025-12-07T19:55:12.617+0000 7f64487d8640 -1 client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: client.0 error registering admin socket command: (17) File exists
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:12 np0005549633 systemd-logind[797]: New session 38 of user ceph-admin.
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: [07/Dec/2025:19:55:12] ENGINE Serving on http://:::9283
Dec  7 14:55:12 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mgr-compute-0-dyzcyj[74676]: [07/Dec/2025:19:55:12] ENGINE Bus STARTED
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO cherrypy.error] [07/Dec/2025:19:55:12] ENGINE Serving on http://:::9283
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO cherrypy.error] [07/Dec/2025:19:55:12] ENGINE Bus STARTED
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [prometheus INFO root] Engine started.
Dec  7 14:55:12 np0005549633 systemd[1]: Started Session 38 of User ceph-admin.
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"} v 0)
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] PerfHandler: starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TaskHandler: starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: Active manager daemon compute-0.dyzcyj restarted
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: Activating manager daemon compute-0.dyzcyj
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: Manager daemon compute-0.dyzcyj is now available
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: from='mgr.14727 192.168.122.100:0/2686128659' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' 
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/mirror_snapshot_schedule"}]: dispatch
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"} v 0)
Dec  7 14:55:12 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  7 14:55:12 np0005549633 ceph-mgr[74680]: [rbd_support INFO root] setup complete
Dec  7 14:55:13 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:13 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 14 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:13 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec  7 14:55:13 np0005549633 ceph-mgr[74680]: [dashboard INFO dashboard.module] Engine started.
Dec  7 14:55:13 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec  7 14:55:13 np0005549633 ceph-mon[74384]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.dyzcyj(active, since 1.5028s), standbys: compute-2.orbdku, compute-1.cgejnh
Dec  7 14:55:13 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:55:13 np0005549633 podman[100760]: 2025-12-07 19:55:13.691756171 +0000 UTC m=+0.079192824 container exec a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  7 14:55:13 np0005549633 podman[100760]: 2025-12-07 19:55:13.790009641 +0000 UTC m=+0.177446294 container exec_died a36e06099c02599ce100319f3e1ca3bb11c317452cbfc38195b5b4d934af8ffd (image=quay.io/ceph/ceph:v19, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  7 14:55:13 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:55:13] ENGINE Bus STARTING
Dec  7 14:55:13 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:55:13] ENGINE Bus STARTING
Dec  7 14:55:13 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:55:13] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:55:13 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:55:13] ENGINE Serving on http://192.168.122.100:8765
Dec  7 14:55:13 np0005549633 ceph-mon[74384]: from='mgr.14727 192.168.122.100:0/2686128659' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:55:13 np0005549633 ceph-mon[74384]: from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.dyzcyj/trash_purge_schedule"}]: dispatch
Dec  7 14:55:13 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:13 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:13 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.102 - anonymous [07/Dec/2025:19:55:13.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:14 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Dec  7 14:55:14 np0005549633 ceph-osd[82672]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:55:14] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:55:14] ENGINE Serving on https://192.168.122.100:7150
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:55:14] ENGINE Bus STARTED
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:55:14] ENGINE Bus STARTED
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: [cephadm INFO cherrypy.error] [07/Dec/2025:19:55:14] ENGINE Client ('192.168.122.100', 51368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: log_channel(cephadm) log [INF] : [07/Dec/2025:19:55:14] ENGINE Client ('192.168.122.100', 51368) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  7 14:55:14 np0005549633 podman[100993]: 2025-12-07 19:55:14.257819415 +0000 UTC m=+0.100339836 container exec 738fbf4b61e3e049ea6c6ad82a2f478b4ef919ad4cb7a6647209e9c5acce1efb (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:14 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efbac002a80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:14 np0005549633 podman[100993]: 2025-12-07 19:55:14.294870119 +0000 UTC m=+0.137390440 container exec_died 738fbf4b61e3e049ea6c6ad82a2f478b4ef919ad4cb7a6647209e9c5acce1efb (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  7 14:55:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  7 14:55:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  7 14:55:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  7 14:55:14 np0005549633 ceph-mon[74384]: log_channel(audit) log [INF] : from='mgr.14727 ' entity='mgr.compute-0.dyzcyj' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  7 14:55:14 np0005549633 ceph-mon[74384]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  7 14:55:14 np0005549633 ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb[96020]: 07/12/2025 19:55:14 : epoch 6935db29 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7efba8004440 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  7 14:55:14 np0005549633 radosgw[94049]: ====== starting new request req=0x7faf985d15d0 =====
Dec  7 14:55:14 np0005549633 radosgw[94049]: ====== req done req=0x7faf985d15d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  7 14:55:14 np0005549633 radosgw[94049]: beast: 0x7faf985d15d0: 192.168.122.100 - anonymous [07/Dec/2025:19:55:14.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  7 14:55:14 np0005549633 ceph-mgr[74680]: [devicehealth INFO root] Check health
Dec  7 14:55:14 np0005549633 podman[101088]: 2025-12-07 19:55:14.615161586 +0000 UTC m=+0.061151216 container exec f6972ffed0e83c3b514ab9a6b86cb292784ac599aabfb4955f3cd539c79ff04d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  7 14:55:14 np0005549633 podman[101088]: 2025-12-07 19:55:14.627939775 +0000 UTC m=+0.073929375 container exec_died f6972ffed0e83c3b514ab9a6b86cb292784ac599aabfb4955f3cd539c79ff04d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-nfs-cephfs-2-0-compute-0-tkfndb, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  7 14:55:14 np0005549633 podman[101236]: 2025-12-07 19:55:14.831061489 +0000 UTC m=+0.054451257 container exec b8e4b8d0b734345d34b340f6a7237c7040cd2f88995599741bdbda00e6860991 (image=quay.io/ceph/haproxy:2.3, name=ceph-a8ac706f-8288-541e-8e56-e1124d9b483d-haproxy-nfs-cephfs-compute-0-cpclff)
