Oct 09 10:55:21 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 09 10:55:21 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 09 10:55:21 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 10:55:21 localhost kernel: BIOS-provided physical RAM map:
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 09 10:55:21 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 09 10:55:21 localhost kernel: NX (Execute Disable) protection: active
Oct 09 10:55:21 localhost kernel: APIC: Static calls initialized
Oct 09 10:55:21 localhost kernel: SMBIOS 2.8 present.
Oct 09 10:55:21 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 09 10:55:21 localhost kernel: Hypervisor detected: KVM
Oct 09 10:55:21 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 09 10:55:21 localhost kernel: kvm-clock: using sched offset of 2874453794487 cycles
Oct 09 10:55:21 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 09 10:55:21 localhost kernel: tsc: Detected 2800.000 MHz processor
Oct 09 10:55:21 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 09 10:55:21 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 09 10:55:21 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 09 10:55:21 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 09 10:55:21 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 09 10:55:21 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 09 10:55:21 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 09 10:55:21 localhost kernel: Using GB pages for direct mapping
Oct 09 10:55:21 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 09 10:55:21 localhost kernel: ACPI: Early table checksum verification disabled
Oct 09 10:55:21 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 09 10:55:21 localhost kernel: ACPI: RSDT 0x00000000BFFE16C4 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 10:55:21 localhost kernel: ACPI: FACP 0x00000000BFFE1578 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 10:55:21 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F8 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 10:55:21 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 09 10:55:21 localhost kernel: ACPI: APIC 0x00000000BFFE15EC 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 10:55:21 localhost kernel: ACPI: WAET 0x00000000BFFE169C 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 10:55:21 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1578-0xbffe15eb]
Oct 09 10:55:21 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1577]
Oct 09 10:55:21 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 09 10:55:21 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15ec-0xbffe169b]
Oct 09 10:55:21 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe169c-0xbffe16c3]
Oct 09 10:55:21 localhost kernel: No NUMA configuration found
Oct 09 10:55:21 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 09 10:55:21 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 09 10:55:21 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 09 10:55:21 localhost kernel: Zone ranges:
Oct 09 10:55:21 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 09 10:55:21 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 09 10:55:21 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 09 10:55:21 localhost kernel:   Device   empty
Oct 09 10:55:21 localhost kernel: Movable zone start for each node
Oct 09 10:55:21 localhost kernel: Early memory node ranges
Oct 09 10:55:21 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 09 10:55:21 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 09 10:55:21 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 09 10:55:21 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 09 10:55:21 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 09 10:55:21 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 09 10:55:21 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 09 10:55:21 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 09 10:55:21 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 09 10:55:21 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 09 10:55:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 09 10:55:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 09 10:55:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 09 10:55:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 09 10:55:21 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 09 10:55:21 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 09 10:55:21 localhost kernel: TSC deadline timer available
Oct 09 10:55:21 localhost kernel: CPU topo: Max. logical packages:   8
Oct 09 10:55:21 localhost kernel: CPU topo: Max. logical dies:       8
Oct 09 10:55:21 localhost kernel: CPU topo: Max. dies per package:   1
Oct 09 10:55:21 localhost kernel: CPU topo: Max. threads per core:   1
Oct 09 10:55:21 localhost kernel: CPU topo: Num. cores per package:     1
Oct 09 10:55:21 localhost kernel: CPU topo: Num. threads per package:   1
Oct 09 10:55:21 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 09 10:55:21 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 09 10:55:21 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 09 10:55:21 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 09 10:55:21 localhost kernel: Booting paravirtualized kernel on KVM
Oct 09 10:55:21 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 09 10:55:21 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 09 10:55:21 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 09 10:55:21 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 09 10:55:21 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 09 10:55:21 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 09 10:55:21 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 10:55:21 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 09 10:55:21 localhost kernel: random: crng init done
Oct 09 10:55:21 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 09 10:55:21 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 09 10:55:21 localhost kernel: Fallback order for Node 0: 0 
Oct 09 10:55:21 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 09 10:55:21 localhost kernel: Policy zone: Normal
Oct 09 10:55:21 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 09 10:55:21 localhost kernel: software IO TLB: area num 8.
Oct 09 10:55:21 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 09 10:55:21 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 09 10:55:21 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 09 10:55:21 localhost kernel: Dynamic Preempt: voluntary
Oct 09 10:55:21 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 09 10:55:21 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 09 10:55:21 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 09 10:55:21 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 09 10:55:21 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 09 10:55:21 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 09 10:55:21 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 09 10:55:21 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 09 10:55:21 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 09 10:55:21 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 09 10:55:21 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 09 10:55:21 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 09 10:55:21 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 09 10:55:21 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 09 10:55:21 localhost kernel: Console: colour VGA+ 80x25
Oct 09 10:55:21 localhost kernel: printk: console [ttyS0] enabled
Oct 09 10:55:21 localhost kernel: ACPI: Core revision 20230331
Oct 09 10:55:21 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 09 10:55:21 localhost kernel: x2apic enabled
Oct 09 10:55:21 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 09 10:55:21 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 09 10:55:21 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Oct 09 10:55:21 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 09 10:55:21 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 09 10:55:21 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 09 10:55:21 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 09 10:55:21 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 09 10:55:21 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 09 10:55:21 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 09 10:55:21 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 09 10:55:21 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 09 10:55:21 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 09 10:55:21 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 09 10:55:21 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 09 10:55:21 localhost kernel: x86/bugs: return thunk changed
Oct 09 10:55:21 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 09 10:55:21 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 09 10:55:21 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 09 10:55:21 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 09 10:55:21 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 09 10:55:21 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 09 10:55:21 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 09 10:55:21 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 09 10:55:21 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 09 10:55:21 localhost kernel: landlock: Up and running.
Oct 09 10:55:21 localhost kernel: Yama: becoming mindful.
Oct 09 10:55:21 localhost kernel: SELinux:  Initializing.
Oct 09 10:55:21 localhost kernel: LSM support for eBPF active
Oct 09 10:55:21 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 10:55:21 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 10:55:21 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 09 10:55:21 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 09 10:55:21 localhost kernel: ... version:                0
Oct 09 10:55:21 localhost kernel: ... bit width:              48
Oct 09 10:55:21 localhost kernel: ... generic registers:      6
Oct 09 10:55:21 localhost kernel: ... value mask:             0000ffffffffffff
Oct 09 10:55:21 localhost kernel: ... max period:             00007fffffffffff
Oct 09 10:55:21 localhost kernel: ... fixed-purpose events:   0
Oct 09 10:55:21 localhost kernel: ... event mask:             000000000000003f
Oct 09 10:55:21 localhost kernel: signal: max sigframe size: 1776
Oct 09 10:55:21 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 09 10:55:21 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 09 10:55:21 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 09 10:55:21 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 09 10:55:21 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 09 10:55:21 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 09 10:55:21 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Oct 09 10:55:21 localhost kernel: node 0 deferred pages initialised in 25ms
Oct 09 10:55:21 localhost kernel: Memory: 7765732K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616508K reserved, 0K cma-reserved)
Oct 09 10:55:21 localhost kernel: devtmpfs: initialized
Oct 09 10:55:21 localhost kernel: x86/mm: Memory block size: 128MB
Oct 09 10:55:21 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 09 10:55:21 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 09 10:55:21 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 09 10:55:21 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 09 10:55:21 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 09 10:55:21 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 09 10:55:21 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 09 10:55:21 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 09 10:55:21 localhost kernel: audit: type=2000 audit(1760007318.771:1): state=initialized audit_enabled=0 res=1
Oct 09 10:55:21 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 09 10:55:21 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 09 10:55:21 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 09 10:55:21 localhost kernel: cpuidle: using governor menu
Oct 09 10:55:21 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 09 10:55:21 localhost kernel: PCI: Using configuration type 1 for base access
Oct 09 10:55:21 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 09 10:55:21 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 09 10:55:21 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 09 10:55:21 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 09 10:55:21 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 09 10:55:21 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 09 10:55:21 localhost kernel: Demotion targets for Node 0: null
Oct 09 10:55:21 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 09 10:55:21 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 09 10:55:21 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 09 10:55:21 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 09 10:55:21 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 09 10:55:21 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 09 10:55:21 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 09 10:55:21 localhost kernel: ACPI: Interpreter enabled
Oct 09 10:55:21 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 09 10:55:21 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 09 10:55:21 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 09 10:55:21 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 09 10:55:21 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 09 10:55:21 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 09 10:55:21 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [3] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [4] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [5] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [6] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [7] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [8] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [9] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [10] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [11] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [12] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [13] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [14] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [15] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [16] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [17] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [18] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [19] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [20] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [21] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [22] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [23] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [24] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [25] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [26] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [27] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [28] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [29] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [30] registered
Oct 09 10:55:21 localhost kernel: acpiphp: Slot [31] registered
Oct 09 10:55:21 localhost kernel: PCI host bridge to bus 0000:00
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc180-0xc18f]
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc140-0xc15f]
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 09 10:55:21 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfea80000-0xfeafffff pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 09 10:55:21 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc160-0xc17f]
Oct 09 10:55:21 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 09 10:55:21 localhost kernel: pci 0000:00:07.0: BAR 0 [io  0xc100-0xc13f]
Oct 09 10:55:21 localhost kernel: pci 0000:00:07.0: BAR 1 [mem 0xfeb93000-0xfeb93fff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Oct 09 10:55:21 localhost kernel: pci 0000:00:07.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 09 10:55:21 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 09 10:55:21 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 09 10:55:21 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 09 10:55:21 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 09 10:55:21 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 09 10:55:21 localhost kernel: iommu: Default domain type: Translated
Oct 09 10:55:21 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 09 10:55:21 localhost kernel: SCSI subsystem initialized
Oct 09 10:55:21 localhost kernel: ACPI: bus type USB registered
Oct 09 10:55:21 localhost kernel: usbcore: registered new interface driver usbfs
Oct 09 10:55:21 localhost kernel: usbcore: registered new interface driver hub
Oct 09 10:55:21 localhost kernel: usbcore: registered new device driver usb
Oct 09 10:55:21 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 09 10:55:21 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 09 10:55:21 localhost kernel: PTP clock support registered
Oct 09 10:55:21 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 09 10:55:21 localhost kernel: NetLabel: Initializing
Oct 09 10:55:21 localhost kernel: NetLabel:  domain hash size = 128
Oct 09 10:55:21 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 09 10:55:21 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 09 10:55:21 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 09 10:55:21 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 09 10:55:21 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 09 10:55:21 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 09 10:55:21 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 09 10:55:21 localhost kernel: vgaarb: loaded
Oct 09 10:55:21 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 09 10:55:21 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 09 10:55:21 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 09 10:55:21 localhost kernel: pnp: PnP ACPI init
Oct 09 10:55:21 localhost kernel: pnp 00:03: [dma 2]
Oct 09 10:55:21 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 09 10:55:21 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 09 10:55:21 localhost kernel: NET: Registered PF_INET protocol family
Oct 09 10:55:21 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 09 10:55:21 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 09 10:55:21 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 09 10:55:21 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 09 10:55:21 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 09 10:55:21 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 09 10:55:21 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 09 10:55:21 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 10:55:21 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 10:55:21 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 09 10:55:21 localhost kernel: NET: Registered PF_XDP protocol family
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 09 10:55:21 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 09 10:55:21 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 09 10:55:21 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 09 10:55:21 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 72010 usecs
Oct 09 10:55:21 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 09 10:55:21 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 09 10:55:21 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 09 10:55:21 localhost kernel: ACPI: bus type thunderbolt registered
Oct 09 10:55:21 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 09 10:55:21 localhost kernel: Initialise system trusted keyrings
Oct 09 10:55:21 localhost kernel: Key type blacklist registered
Oct 09 10:55:21 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 09 10:55:21 localhost kernel: zbud: loaded
Oct 09 10:55:21 localhost kernel: integrity: Platform Keyring initialized
Oct 09 10:55:21 localhost kernel: integrity: Machine keyring initialized
Oct 09 10:55:21 localhost kernel: Freeing initrd memory: 86104K
Oct 09 10:55:21 localhost kernel: NET: Registered PF_ALG protocol family
Oct 09 10:55:21 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 09 10:55:21 localhost kernel: Key type asymmetric registered
Oct 09 10:55:21 localhost kernel: Asymmetric key parser 'x509' registered
Oct 09 10:55:21 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 09 10:55:21 localhost kernel: io scheduler mq-deadline registered
Oct 09 10:55:21 localhost kernel: io scheduler kyber registered
Oct 09 10:55:21 localhost kernel: io scheduler bfq registered
Oct 09 10:55:21 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 09 10:55:21 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 09 10:55:21 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 09 10:55:21 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 09 10:55:21 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 09 10:55:21 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 09 10:55:21 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 09 10:55:21 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 09 10:55:21 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 09 10:55:21 localhost kernel: Non-volatile memory driver v1.3
Oct 09 10:55:21 localhost kernel: rdac: device handler registered
Oct 09 10:55:21 localhost kernel: hp_sw: device handler registered
Oct 09 10:55:21 localhost kernel: emc: device handler registered
Oct 09 10:55:21 localhost kernel: alua: device handler registered
Oct 09 10:55:21 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 09 10:55:21 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 09 10:55:21 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 09 10:55:21 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c140
Oct 09 10:55:21 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 09 10:55:21 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 09 10:55:21 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 09 10:55:21 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 09 10:55:21 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 09 10:55:21 localhost kernel: hub 1-0:1.0: USB hub found
Oct 09 10:55:21 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 09 10:55:21 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 09 10:55:21 localhost kernel: usbserial: USB Serial support registered for generic
Oct 09 10:55:21 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 09 10:55:21 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 09 10:55:21 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 09 10:55:21 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 09 10:55:21 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 09 10:55:21 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 09 10:55:21 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 09 10:55:21 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 09 10:55:21 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 09 10:55:21 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-09T10:55:20 UTC (1760007320)
Oct 09 10:55:21 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 09 10:55:21 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 09 10:55:21 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 09 10:55:21 localhost kernel: usbcore: registered new interface driver usbhid
Oct 09 10:55:21 localhost kernel: usbhid: USB HID core driver
Oct 09 10:55:21 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 09 10:55:21 localhost kernel: Initializing XFRM netlink socket
Oct 09 10:55:21 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 09 10:55:21 localhost kernel: Segment Routing with IPv6
Oct 09 10:55:21 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 09 10:55:21 localhost kernel: mpls_gso: MPLS GSO support
Oct 09 10:55:21 localhost kernel: IPI shorthand broadcast: enabled
Oct 09 10:55:21 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 09 10:55:21 localhost kernel: AES CTR mode by8 optimization enabled
Oct 09 10:55:21 localhost kernel: sched_clock: Marking stable (1191002720, 148735010)->(1416096350, -76358620)
Oct 09 10:55:21 localhost kernel: registered taskstats version 1
Oct 09 10:55:21 localhost kernel: Loading compiled-in X.509 certificates
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 09 10:55:21 localhost kernel: Demotion targets for Node 0: null
Oct 09 10:55:21 localhost kernel: page_owner is disabled
Oct 09 10:55:21 localhost kernel: Key type .fscrypt registered
Oct 09 10:55:21 localhost kernel: Key type fscrypt-provisioning registered
Oct 09 10:55:21 localhost kernel: Key type big_key registered
Oct 09 10:55:21 localhost kernel: Key type encrypted registered
Oct 09 10:55:21 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 09 10:55:21 localhost kernel: Loading compiled-in module X.509 certificates
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 10:55:21 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 09 10:55:21 localhost kernel: ima: No architecture policies found
Oct 09 10:55:21 localhost kernel: evm: Initialising EVM extended attributes:
Oct 09 10:55:21 localhost kernel: evm: security.selinux
Oct 09 10:55:21 localhost kernel: evm: security.SMACK64 (disabled)
Oct 09 10:55:21 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 09 10:55:21 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 09 10:55:21 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 09 10:55:21 localhost kernel: evm: security.apparmor (disabled)
Oct 09 10:55:21 localhost kernel: evm: security.ima
Oct 09 10:55:21 localhost kernel: evm: security.capability
Oct 09 10:55:21 localhost kernel: evm: HMAC attrs: 0x1
Oct 09 10:55:21 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 09 10:55:21 localhost kernel: Running certificate verification RSA selftest
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 09 10:55:21 localhost kernel: Running certificate verification ECDSA selftest
Oct 09 10:55:21 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 09 10:55:21 localhost kernel: clk: Disabling unused clocks
Oct 09 10:55:21 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 09 10:55:21 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 09 10:55:21 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 09 10:55:21 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 09 10:55:21 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 09 10:55:21 localhost kernel: Run /init as init process
Oct 09 10:55:21 localhost kernel:   with arguments:
Oct 09 10:55:21 localhost kernel:     /init
Oct 09 10:55:21 localhost kernel:   with environment:
Oct 09 10:55:21 localhost kernel:     HOME=/
Oct 09 10:55:21 localhost kernel:     TERM=linux
Oct 09 10:55:21 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 09 10:55:21 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 10:55:21 localhost systemd[1]: Detected virtualization kvm.
Oct 09 10:55:21 localhost systemd[1]: Detected architecture x86-64.
Oct 09 10:55:21 localhost systemd[1]: Running in initrd.
Oct 09 10:55:21 localhost systemd[1]: No hostname configured, using default hostname.
Oct 09 10:55:21 localhost systemd[1]: Hostname set to <localhost>.
Oct 09 10:55:21 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 09 10:55:21 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 09 10:55:21 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 10:55:21 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 10:55:21 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 09 10:55:21 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 09 10:55:21 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 09 10:55:21 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 09 10:55:21 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 09 10:55:21 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 09 10:55:21 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 09 10:55:21 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 09 10:55:21 localhost systemd[1]: Reached target Local File Systems.
Oct 09 10:55:21 localhost systemd[1]: Reached target Path Units.
Oct 09 10:55:21 localhost systemd[1]: Reached target Slice Units.
Oct 09 10:55:21 localhost systemd[1]: Reached target Swaps.
Oct 09 10:55:21 localhost systemd[1]: Reached target Timer Units.
Oct 09 10:55:21 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 10:55:21 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 09 10:55:21 localhost systemd[1]: Listening on Journal Socket.
Oct 09 10:55:21 localhost systemd[1]: Listening on udev Control Socket.
Oct 09 10:55:21 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 09 10:55:21 localhost systemd[1]: Reached target Socket Units.
Oct 09 10:55:21 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 10:55:21 localhost systemd[1]: Starting Journal Service...
Oct 09 10:55:21 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 09 10:55:21 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 09 10:55:21 localhost systemd[1]: Starting Create System Users...
Oct 09 10:55:21 localhost systemd[1]: Starting Setup Virtual Console...
Oct 09 10:55:21 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 10:55:21 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 09 10:55:21 localhost systemd[1]: Finished Create System Users.
Oct 09 10:55:21 localhost systemd-journald[303]: Journal started
Oct 09 10:55:21 localhost systemd-journald[303]: Runtime Journal (/run/log/journal/70380d25c55740029d517ec111e5480b) is 8.0M, max 153.5M, 145.5M free.
Oct 09 10:55:21 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Oct 09 10:55:21 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Oct 09 10:55:21 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 09 10:55:21 localhost systemd[1]: Started Journal Service.
Oct 09 10:55:21 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 10:55:21 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 10:55:21 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 10:55:21 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 10:55:21 localhost systemd[1]: Finished Setup Virtual Console.
Oct 09 10:55:21 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 09 10:55:21 localhost systemd[1]: Starting dracut cmdline hook...
Oct 09 10:55:21 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Oct 09 10:55:21 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 10:55:21 localhost systemd[1]: Finished dracut cmdline hook.
Oct 09 10:55:21 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 09 10:55:21 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 09 10:55:21 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 09 10:55:21 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 09 10:55:21 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 09 10:55:21 localhost kernel: RPC: Registered udp transport module.
Oct 09 10:55:21 localhost kernel: RPC: Registered tcp transport module.
Oct 09 10:55:21 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 09 10:55:21 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 09 10:55:21 localhost rpc.statd[443]: Version 2.5.4 starting
Oct 09 10:55:21 localhost rpc.statd[443]: Initializing NSM state
Oct 09 10:55:21 localhost rpc.idmapd[448]: Setting log level to 0
Oct 09 10:55:21 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 09 10:55:21 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 10:55:21 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 10:55:21 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 10:55:21 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 09 10:55:21 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 09 10:55:21 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 09 10:55:21 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 09 10:55:21 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 09 10:55:21 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 09 10:55:21 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 10:55:21 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 09 10:55:21 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 10:55:21 localhost systemd[1]: Reached target Network.
Oct 09 10:55:21 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 10:55:21 localhost systemd[1]: Starting dracut initqueue hook...
Oct 09 10:55:21 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 09 10:55:21 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 09 10:55:21 localhost kernel:  vda: vda1
Oct 09 10:55:21 localhost kernel: libata version 3.00 loaded.
Oct 09 10:55:21 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 09 10:55:21 localhost systemd-udevd[484]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:55:21 localhost systemd-udevd[503]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:55:21 localhost kernel: scsi host0: ata_piix
Oct 09 10:55:21 localhost kernel: scsi host1: ata_piix
Oct 09 10:55:21 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc180 irq 14 lpm-pol 0
Oct 09 10:55:21 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc188 irq 15 lpm-pol 0
Oct 09 10:55:21 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 10:55:21 localhost systemd[1]: Reached target Initrd Root Device.
Oct 09 10:55:21 localhost kernel: ata1: found unknown device (class 0)
Oct 09 10:55:21 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 09 10:55:21 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 09 10:55:21 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 09 10:55:21 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 09 10:55:21 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 09 10:55:22 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 09 10:55:22 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 09 10:55:22 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 09 10:55:22 localhost systemd[1]: Reached target System Initialization.
Oct 09 10:55:22 localhost systemd[1]: Reached target Basic System.
Oct 09 10:55:22 localhost systemd[1]: Finished dracut initqueue hook.
Oct 09 10:55:22 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 10:55:22 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 09 10:55:22 localhost systemd[1]: Reached target Remote File Systems.
Oct 09 10:55:22 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 09 10:55:22 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 09 10:55:22 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 09 10:55:22 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Oct 09 10:55:22 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 10:55:22 localhost systemd[1]: Mounting /sysroot...
Oct 09 10:55:22 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 09 10:55:22 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 09 10:55:22 localhost kernel: XFS (vda1): Ending clean mount
Oct 09 10:55:22 localhost systemd[1]: Mounted /sysroot.
Oct 09 10:55:22 localhost systemd[1]: Reached target Initrd Root File System.
Oct 09 10:55:22 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 09 10:55:22 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 09 10:55:22 localhost systemd[1]: Reached target Initrd File Systems.
Oct 09 10:55:22 localhost systemd[1]: Reached target Initrd Default Target.
Oct 09 10:55:22 localhost systemd[1]: Starting dracut mount hook...
Oct 09 10:55:22 localhost systemd[1]: Finished dracut mount hook.
Oct 09 10:55:22 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 09 10:55:22 localhost rpc.idmapd[448]: exiting on signal 15
Oct 09 10:55:22 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 09 10:55:22 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 09 10:55:22 localhost systemd[1]: Stopped target Network.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Timer Units.
Oct 09 10:55:22 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 09 10:55:22 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Basic System.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Path Units.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Remote File Systems.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Slice Units.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Socket Units.
Oct 09 10:55:22 localhost systemd[1]: Stopped target System Initialization.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Local File Systems.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Swaps.
Oct 09 10:55:22 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut mount hook.
Oct 09 10:55:22 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 09 10:55:22 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 09 10:55:22 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 09 10:55:22 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 09 10:55:22 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 09 10:55:22 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 09 10:55:22 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 09 10:55:22 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 09 10:55:22 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 09 10:55:22 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 09 10:55:22 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 09 10:55:22 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 09 10:55:22 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Closed udev Control Socket.
Oct 09 10:55:22 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Closed udev Kernel Socket.
Oct 09 10:55:22 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 09 10:55:22 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 09 10:55:22 localhost systemd[1]: Starting Cleanup udev Database...
Oct 09 10:55:22 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 09 10:55:22 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 09 10:55:22 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Stopped Create System Users.
Oct 09 10:55:22 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 09 10:55:22 localhost systemd[1]: Finished Cleanup udev Database.
Oct 09 10:55:22 localhost systemd[1]: Reached target Switch Root.
Oct 09 10:55:22 localhost systemd[1]: Starting Switch Root...
Oct 09 10:55:22 localhost systemd[1]: Switching root.
Oct 09 10:55:22 localhost systemd-journald[303]: Journal stopped
Oct 09 10:55:23 compute-2 systemd-journald[303]: Received SIGTERM from PID 1 (systemd).
Oct 09 10:55:23 compute-2 kernel: audit: type=1404 audit(1760007323.076:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 10:55:23 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 10:55:23 compute-2 kernel: audit: type=1403 audit(1760007323.221:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 09 10:55:23 compute-2 systemd[1]: Successfully loaded SELinux policy in 148.174ms.
Oct 09 10:55:23 compute-2 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.022ms.
Oct 09 10:55:23 compute-2 systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 10:55:23 compute-2 systemd[1]: Detected virtualization kvm.
Oct 09 10:55:23 compute-2 systemd[1]: Detected architecture x86-64.
Oct 09 10:55:23 compute-2 systemd[1]: Hostname set to <compute-2>.
Oct 09 10:55:23 compute-2 systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:55:23 compute-2 systemd-sysv-generator[640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:55:23 compute-2 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Stopped Switch Root.
Oct 09 10:55:23 compute-2 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 09 10:55:23 compute-2 systemd[1]: Created slice Slice /system/getty.
Oct 09 10:55:23 compute-2 systemd[1]: Created slice Slice /system/serial-getty.
Oct 09 10:55:23 compute-2 systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 09 10:55:23 compute-2 systemd[1]: Created slice User and Session Slice.
Oct 09 10:55:23 compute-2 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 10:55:23 compute-2 systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 09 10:55:23 compute-2 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 10:55:23 compute-2 systemd[1]: Stopped target Switch Root.
Oct 09 10:55:23 compute-2 systemd[1]: Stopped target Initrd File Systems.
Oct 09 10:55:23 compute-2 systemd[1]: Stopped target Initrd Root File System.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target Path Units.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target rpc_pipefs.target.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target Slice Units.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target Local Verity Protected Volumes.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 09 10:55:23 compute-2 systemd[1]: Reached target RPC Port Mapper.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on Process Core Dump Socket.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on udev Control Socket.
Oct 09 10:55:23 compute-2 systemd[1]: Listening on udev Kernel Socket.
Oct 09 10:55:23 compute-2 systemd[1]: Mounting Huge Pages File System...
Oct 09 10:55:23 compute-2 systemd[1]: Mounting /dev/hugepages1G...
Oct 09 10:55:23 compute-2 systemd[1]: Mounting /dev/hugepages2M...
Oct 09 10:55:23 compute-2 systemd[1]: Mounting POSIX Message Queue File System...
Oct 09 10:55:23 compute-2 systemd[1]: Mounting Kernel Debug File System...
Oct 09 10:55:23 compute-2 systemd[1]: Mounting Kernel Trace File System...
Oct 09 10:55:23 compute-2 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 10:55:23 compute-2 systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 10:55:23 compute-2 systemd[1]: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct 09 10:55:23 compute-2 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Load Kernel Module drm...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Load Kernel Module fuse...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 09 10:55:23 compute-2 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Stopped File System Check on Root Device.
Oct 09 10:55:23 compute-2 systemd[1]: Stopped Journal Service.
Oct 09 10:55:23 compute-2 kernel: fuse: init (API version 7.37)
Oct 09 10:55:23 compute-2 systemd[1]: Starting Journal Service...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Load Kernel Modules...
Oct 09 10:55:23 compute-2 systemd[1]: Starting Generate network units from Kernel command line...
Oct 09 10:55:23 compute-2 systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 10:55:23 compute-2 systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 09 10:55:23 compute-2 systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 09 10:55:23 compute-2 systemd[1]: Starting Coldplug All udev Devices...
Oct 09 10:55:23 compute-2 systemd[1]: Mounted Huge Pages File System.
Oct 09 10:55:23 compute-2 kernel: ACPI: bus type drm_connector registered
Oct 09 10:55:23 compute-2 systemd[1]: Mounted /dev/hugepages1G.
Oct 09 10:55:23 compute-2 systemd[1]: Mounted /dev/hugepages2M.
Oct 09 10:55:23 compute-2 systemd[1]: Mounted POSIX Message Queue File System.
Oct 09 10:55:23 compute-2 systemd[1]: Mounted Kernel Debug File System.
Oct 09 10:55:23 compute-2 systemd-journald[686]: Journal started
Oct 09 10:55:23 compute-2 systemd-journald[686]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct 09 10:55:23 compute-2 systemd[1]: Queued start job for default target Multi-User System.
Oct 09 10:55:23 compute-2 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Started Journal Service.
Oct 09 10:55:23 compute-2 systemd[1]: Mounted Kernel Trace File System.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 10:55:23 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 10:55:23 compute-2 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Load Kernel Module drm.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 09 10:55:23 compute-2 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 09 10:55:23 compute-2 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Load Kernel Module fuse.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Generate network units from Kernel command line.
Oct 09 10:55:23 compute-2 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 09 10:55:23 compute-2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 09 10:55:23 compute-2 kernel: Bridge firewalling registered
Oct 09 10:55:23 compute-2 systemd-modules-load[687]: Inserted module 'br_netfilter'
Oct 09 10:55:23 compute-2 systemd[1]: Mounting FUSE Control File System...
Oct 09 10:55:23 compute-2 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 09 10:55:23 compute-2 systemd[1]: Mounted FUSE Control File System.
Oct 09 10:55:23 compute-2 systemd[1]: Activating swap /swap...
Oct 09 10:55:23 compute-2 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 10:55:23 compute-2 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct 09 10:55:23 compute-2 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 09 10:55:23 compute-2 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 09 10:55:23 compute-2 systemd-modules-load[687]: Inserted module 'nf_conntrack'
Oct 09 10:55:23 compute-2 systemd[1]: Starting Load/Save OS Random Seed...
Oct 09 10:55:23 compute-2 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 09 10:55:23 compute-2 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct 09 10:55:23 compute-2 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 10:55:23 compute-2 systemd[1]: Activated swap /swap.
Oct 09 10:55:23 compute-2 systemd-journald[686]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 9.186ms for 777 entries.
Oct 09 10:55:23 compute-2 systemd-journald[686]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct 09 10:55:23 compute-2 systemd-journald[686]: Received client request to flush runtime journal.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Load/Save OS Random Seed.
Oct 09 10:55:23 compute-2 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 10:55:23 compute-2 systemd[1]: Reached target Swaps.
Oct 09 10:55:23 compute-2 systemd[1]: Starting Apply Kernel Variables...
Oct 09 10:55:23 compute-2 systemd[1]: Finished Coldplug All udev Devices.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Apply Kernel Variables.
Oct 09 10:55:23 compute-2 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 09 10:55:24 compute-2 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 10:55:24 compute-2 systemd[1]: Reached target Preparation for Local File Systems.
Oct 09 10:55:24 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 10:55:24 compute-2 systemd[1]: Reached target Local File Systems.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Import network configuration from initramfs...
Oct 09 10:55:24 compute-2 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct 09 10:55:24 compute-2 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 09 10:55:24 compute-2 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 09 10:55:24 compute-2 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Automatic Boot Loader Update...
Oct 09 10:55:24 compute-2 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 09 10:55:24 compute-2 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 10:55:24 compute-2 bootctl[707]: Couldn't find EFI system partition, skipping.
Oct 09 10:55:24 compute-2 systemd[1]: Finished Automatic Boot Loader Update.
Oct 09 10:55:24 compute-2 systemd[1]: Finished Import network configuration from initramfs.
Oct 09 10:55:24 compute-2 systemd-udevd[709]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 10:55:24 compute-2 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 10:55:24 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 10:55:24 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 10:55:24 compute-2 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 09 10:55:24 compute-2 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:55:24 compute-2 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 09 10:55:24 compute-2 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 09 10:55:24 compute-2 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 09 10:55:24 compute-2 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 09 10:55:24 compute-2 systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Security Auditing Service...
Oct 09 10:55:24 compute-2 systemd[1]: Starting RPC Bind...
Oct 09 10:55:24 compute-2 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct 09 10:55:24 compute-2 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct 09 10:55:24 compute-2 auditd[776]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 09 10:55:24 compute-2 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 09 10:55:24 compute-2 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 09 10:55:24 compute-2 auditd[776]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 09 10:55:24 compute-2 kernel: Console: switching to colour dummy device 80x25
Oct 09 10:55:24 compute-2 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 09 10:55:24 compute-2 kernel: [drm] features: -context_init
Oct 09 10:55:24 compute-2 kernel: [drm] number of scanouts: 1
Oct 09 10:55:24 compute-2 kernel: [drm] number of cap sets: 0
Oct 09 10:55:24 compute-2 systemd[1]: Started RPC Bind.
Oct 09 10:55:24 compute-2 systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:55:24 compute-2 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 09 10:55:24 compute-2 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 09 10:55:24 compute-2 kernel: Console: switching to colour frame buffer device 128x48
Oct 09 10:55:24 compute-2 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 09 10:55:24 compute-2 augenrules[781]: /sbin/augenrules: No change
Oct 09 10:55:24 compute-2 augenrules[806]: No rules
Oct 09 10:55:24 compute-2 augenrules[806]: enabled 1
Oct 09 10:55:24 compute-2 augenrules[806]: failure 1
Oct 09 10:55:24 compute-2 augenrules[806]: pid 776
Oct 09 10:55:24 compute-2 augenrules[806]: rate_limit 0
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_limit 8192
Oct 09 10:55:24 compute-2 augenrules[806]: lost 0
Oct 09 10:55:24 compute-2 augenrules[806]: backlog 3
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_wait_time 60000
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_wait_time_actual 0
Oct 09 10:55:24 compute-2 augenrules[806]: enabled 1
Oct 09 10:55:24 compute-2 augenrules[806]: failure 1
Oct 09 10:55:24 compute-2 augenrules[806]: pid 776
Oct 09 10:55:24 compute-2 augenrules[806]: rate_limit 0
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_limit 8192
Oct 09 10:55:24 compute-2 augenrules[806]: lost 0
Oct 09 10:55:24 compute-2 augenrules[806]: backlog 4
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_wait_time 60000
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_wait_time_actual 0
Oct 09 10:55:24 compute-2 augenrules[806]: enabled 1
Oct 09 10:55:24 compute-2 augenrules[806]: failure 1
Oct 09 10:55:24 compute-2 augenrules[806]: pid 776
Oct 09 10:55:24 compute-2 augenrules[806]: rate_limit 0
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_limit 8192
Oct 09 10:55:24 compute-2 augenrules[806]: lost 0
Oct 09 10:55:24 compute-2 augenrules[806]: backlog 8
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_wait_time 60000
Oct 09 10:55:24 compute-2 augenrules[806]: backlog_wait_time_actual 0
Oct 09 10:55:24 compute-2 systemd[1]: Started Security Auditing Service.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 09 10:55:24 compute-2 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 09 10:55:24 compute-2 kernel: kvm_amd: TSC scaling supported
Oct 09 10:55:24 compute-2 kernel: kvm_amd: Nested Virtualization enabled
Oct 09 10:55:24 compute-2 kernel: kvm_amd: Nested Paging enabled
Oct 09 10:55:24 compute-2 kernel: kvm_amd: LBR virtualization supported
Oct 09 10:55:24 compute-2 systemd[1]: Reached target System Initialization.
Oct 09 10:55:24 compute-2 systemd[1]: Started dnf makecache --timer.
Oct 09 10:55:24 compute-2 systemd[1]: Started Daily rotation of log files.
Oct 09 10:55:24 compute-2 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct 09 10:55:24 compute-2 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct 09 10:55:24 compute-2 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 09 10:55:24 compute-2 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 09 10:55:24 compute-2 systemd[1]: Reached target Timer Units.
Oct 09 10:55:24 compute-2 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 10:55:24 compute-2 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 09 10:55:24 compute-2 systemd[1]: Reached target Socket Units.
Oct 09 10:55:24 compute-2 systemd[1]: Starting D-Bus System Message Bus...
Oct 09 10:55:24 compute-2 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 10:55:24 compute-2 systemd[1]: Started D-Bus System Message Bus.
Oct 09 10:55:24 compute-2 systemd[1]: Reached target Basic System.
Oct 09 10:55:24 compute-2 dbus-broker-lau[835]: Ready
Oct 09 10:55:24 compute-2 systemd[1]: Starting NTP client/server...
Oct 09 10:55:24 compute-2 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 09 10:55:24 compute-2 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 09 10:55:24 compute-2 systemd[1]: Started irqbalance daemon.
Oct 09 10:55:24 compute-2 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 09 10:55:24 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 10:55:24 compute-2 systemd[1]: Starting Netfilter Tables...
Oct 09 10:55:24 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 10:55:24 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 10:55:24 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 10:55:24 compute-2 systemd[1]: Reached target sshd-keygen.target.
Oct 09 10:55:24 compute-2 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 09 10:55:24 compute-2 systemd[1]: Reached target User and Group Name Lookups.
Oct 09 10:55:24 compute-2 systemd[1]: Starting Resets System Activity Logs...
Oct 09 10:55:24 compute-2 systemd[1]: Starting User Login Management...
Oct 09 10:55:24 compute-2 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 09 10:55:24 compute-2 systemd[1]: Finished Resets System Activity Logs.
Oct 09 10:55:25 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 10:55:25 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 10:55:25 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 10:55:25 compute-2 chronyd[850]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 09 10:55:25 compute-2 systemd-logind[844]: New seat seat0.
Oct 09 10:55:25 compute-2 chronyd[850]: Frequency -29.005 +/- 0.099 ppm read from /var/lib/chrony/drift
Oct 09 10:55:25 compute-2 systemd-logind[844]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 10:55:25 compute-2 chronyd[850]: Loaded seccomp filter (level 2)
Oct 09 10:55:25 compute-2 systemd-logind[844]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 10:55:25 compute-2 systemd[1]: Started User Login Management.
Oct 09 10:55:25 compute-2 systemd[1]: Started NTP client/server.
Oct 09 10:55:25 compute-2 systemd[1]: Finished Netfilter Tables.
Oct 09 10:55:25 compute-2 cloud-init[870]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 10:55:25 +0000. Up 6.24 seconds.
Oct 09 10:55:25 compute-2 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 09 10:55:25 compute-2 systemd[1]: Reached target Preparation for Network.
Oct 09 10:55:25 compute-2 systemd[1]: Starting Open vSwitch Database Unit...
Oct 09 10:55:25 compute-2 chown[872]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 09 10:55:25 compute-2 ovs-ctl[877]: Starting ovsdb-server [  OK  ]
Oct 09 10:55:25 compute-2 ovs-vsctl[926]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 09 10:55:26 compute-2 ovs-vsctl[936]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"03e3fad8-8a9f-499e-9d6c-148720b92652\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 09 10:55:26 compute-2 ovs-ctl[877]: Configuring Open vSwitch system IDs [  OK  ]
Oct 09 10:55:26 compute-2 ovs-vsctl[942]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct 09 10:55:26 compute-2 ovs-ctl[877]: Enabling remote OVSDB managers [  OK  ]
Oct 09 10:55:26 compute-2 systemd[1]: Started Open vSwitch Database Unit.
Oct 09 10:55:26 compute-2 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 09 10:55:26 compute-2 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 09 10:55:26 compute-2 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 09 10:55:26 compute-2 kernel: openvswitch: Open vSwitch switching datapath
Oct 09 10:55:26 compute-2 ovs-ctl[986]: Inserting openvswitch module [  OK  ]
Oct 09 10:55:26 compute-2 kernel: ovs-system: entered promiscuous mode
Oct 09 10:55:26 compute-2 kernel: Timeout policy base is empty
Oct 09 10:55:26 compute-2 systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:55:26 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 10:55:26 compute-2 kernel: vlan22: entered promiscuous mode
Oct 09 10:55:26 compute-2 kernel: vlan20: entered promiscuous mode
Oct 09 10:55:26 compute-2 systemd-udevd[749]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 10:55:26 compute-2 kernel: vlan23: entered promiscuous mode
Oct 09 10:55:26 compute-2 kernel: vlan21: entered promiscuous mode
Oct 09 10:55:26 compute-2 ovs-ctl[955]: Starting ovs-vswitchd [  OK  ]
Oct 09 10:55:26 compute-2 ovs-vsctl[1030]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct 09 10:55:26 compute-2 ovs-ctl[955]: Enabling remote OVSDB managers [  OK  ]
Oct 09 10:55:26 compute-2 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 09 10:55:26 compute-2 systemd[1]: Starting Open vSwitch...
Oct 09 10:55:26 compute-2 systemd[1]: Finished Open vSwitch.
Oct 09 10:55:26 compute-2 systemd[1]: Starting Network Manager...
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.5923] NetworkManager (version 1.54.1-1.el9) is starting... (boot:5de52937-6f17-4685-b8e5-35f2b47f7aa5)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.5928] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.6048] manager[0x556ff4c87040]: monitoring kernel firmware directory '/lib/firmware'.
Oct 09 10:55:26 compute-2 systemd[1]: Starting Hostname Service...
Oct 09 10:55:26 compute-2 systemd[1]: Started Hostname Service.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.6860] hostname: hostname: using hostnamed
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.6860] hostname: static hostname changed from (none) to "compute-2"
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.6865] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.6970] manager[0x556ff4c87040]: rfkill: Wi-Fi hardware radio set enabled
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.6971] manager[0x556ff4c87040]: rfkill: WWAN hardware radio set enabled
Oct 09 10:55:26 compute-2 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7042] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7067] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7068] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7068] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7069] manager: Networking is enabled by state file
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7075] settings: Loaded settings plugin: keyfile (internal)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7108] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 09 10:55:26 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7222] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7250] dhcp: init: Using DHCP client 'internal'
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7253] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7265] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7277] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7285] device (lo): Activation: starting connection 'lo' (e4a1de03-5afe-4633-bf7d-5d9a4a010dc2)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7295] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7299] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7323] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7326] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7343] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7346] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7360] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7363] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7380] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7384] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7402] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7406] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7421] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7424] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7431] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/9)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7434] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7442] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7445] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7451] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7454] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7460] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7464] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7474] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/13)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7476] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7483] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7485] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7492] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7495] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 systemd[1]: Started Network Manager.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7505] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7513] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7516] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 10:55:26 compute-2 systemd[1]: Reached target Network.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7518] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7521] device (eth0): carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7523] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7525] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7527] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7529] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7531] device (eth1): carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7538] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7545] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: vlan20: left promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7568] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7573] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7578] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7583] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7588] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7593] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7595] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7598] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7600] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7603] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7605] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7608] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7616] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7619] policy: auto-activating connection 'ci-private-network' (3bd3b27c-a8da-5d9a-8c0f-1b52a66f9557)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7620] policy: auto-activating connection 'vlan22-port' (04a4c572-140e-47e6-849c-b1a29eadc7c1)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7622] policy: auto-activating connection 'br-ex-br' (04f5a73b-b235-467c-a270-0a02a4ea4079)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7624] policy: auto-activating connection 'eth1-port' (0bc00282-17c4-4cc0-b7a9-d1ce634ea549)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7625] policy: auto-activating connection 'vlan23-port' (4061242d-8b67-4224-b1bb-ca63c3480e60)
Oct 09 10:55:26 compute-2 systemd[1]: Starting Network Manager Wait Online...
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7629] policy: auto-activating connection 'vlan20-port' (79c27cf4-3f55-4392-acd0-8e03025cba06)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7630] policy: auto-activating connection 'vlan21-port' (e1e5368f-e34c-48da-aaf3-3fe0f1f4f137)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7632] policy: auto-activating connection 'br-ex-port' (f73a5a02-778a-4472-925d-b4b8361f5466)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7635] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7640] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7644] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7646] device (eth1): Activation: starting connection 'ci-private-network' (3bd3b27c-a8da-5d9a-8c0f-1b52a66f9557)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7649] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (04a4c572-140e-47e6-849c-b1a29eadc7c1)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7656] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (04f5a73b-b235-467c-a270-0a02a4ea4079)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7658] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0bc00282-17c4-4cc0-b7a9-d1ce634ea549)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7661] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4061242d-8b67-4224-b1bb-ca63c3480e60)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7665] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (79c27cf4-3f55-4392-acd0-8e03025cba06)
Oct 09 10:55:26 compute-2 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7668] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (e1e5368f-e34c-48da-aaf3-3fe0f1f4f137)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7671] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f73a5a02-778a-4472-925d-b4b8361f5466)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7673] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 10:55:26 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 09 10:55:26 compute-2 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct 09 10:55:26 compute-2 kernel: vlan21: left promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7756] device (lo): Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7775] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7778] manager: NetworkManager state is now CONNECTING
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7780] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7788] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7791] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7795] device (br-ex)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7802] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7803] device (eth1)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7809] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7810] device (vlan20)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7816] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7817] device (vlan21)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7825] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7826] device (vlan22)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7834] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7835] device (vlan23)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7841] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7842] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7845] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7848] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7850] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7861] device (eth1): disconnecting for new activation request.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7864] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:55:26 compute-2 kernel: vlan23: left promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7937] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7944] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7954] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7960] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7965] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f73a5a02-778a-4472-925d-b4b8361f5466)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7969] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7973] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (0bc00282-17c4-4cc0-b7a9-d1ce634ea549)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7975] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7979] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (79c27cf4-3f55-4392-acd0-8e03025cba06)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7982] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7986] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (e1e5368f-e34c-48da-aaf3-3fe0f1f4f137)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7989] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7992] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (04a4c572-140e-47e6-849c-b1a29eadc7c1)
Oct 09 10:55:26 compute-2 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 10:55:26 compute-2 systemd[1]: Reached target NFS client services.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.7996] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8002] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (4061242d-8b67-4224-b1bb-ca63c3480e60)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8004] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 10:55:26 compute-2 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8020] device (eth1): Activation: starting connection 'ci-private-network' (3bd3b27c-a8da-5d9a-8c0f-1b52a66f9557)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8025] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:55:26 compute-2 systemd[1]: Reached target Remote File Systems.
Oct 09 10:55:26 compute-2 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 10:55:26 compute-2 kernel: vlan22: left promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8081] dhcp4 (eth0): state changed new lease, address=38.102.83.219
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8100] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8136] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8144] policy: auto-activating connection 'vlan20-if' (1166c151-0977-4973-94db-a3feb25b4b56)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8146] policy: auto-activating connection 'vlan22-if' (e32dccd0-d86c-4369-94c8-68d05fd47ceb)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8148] policy: auto-activating connection 'vlan23-if' (627d8009-2d86-4f8e-a2e2-52969b161008)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8156] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8160] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8163] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8164] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8166] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8170] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8173] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8175] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8177] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8180] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8183] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8184] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8186] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8190] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8195] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8196] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8198] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8202] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8204] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8206] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8208] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8212] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8214] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8216] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8219] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: ovs-system: left promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8224] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8239] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8245] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8251] policy: auto-activating connection 'vlan21-if' (cfee8c9e-1c86-4780-8a5c-a8a347ec6af2)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8258] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8262] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8267] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (1166c151-0977-4973-94db-a3feb25b4b56)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8269] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8273] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8278] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (627d8009-2d86-4f8e-a2e2-52969b161008)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8279] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8285] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8290] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8295] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8299] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8306] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8311] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8322] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8328] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 kernel: ovs-system: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8335] policy: auto-activating connection 'vlan22-if' (e32dccd0-d86c-4369-94c8-68d05fd47ceb)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8339] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8343] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8346] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (cfee8c9e-1c86-4780-8a5c-a8a347ec6af2)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8347] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8348] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: No such timeout policy "ovs_test_tp"
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8351] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8354] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8357] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8362] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8367] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8370] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8374] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8380] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8386] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8390] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8394] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e32dccd0-d86c-4369-94c8-68d05fd47ceb)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8395] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8398] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8399] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8404] policy: auto-activating connection 'br-ex-if' (e7d25b96-7751-41b0-86b4-dac97aaa5649)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8406] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8412] device (eth0): Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8417] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8420] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8422] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8428] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8431] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8433] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8441] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e7d25b96-7751-41b0-86b4-dac97aaa5649)
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8441] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8445] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8450] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8452] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8453] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8455] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8456] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8457] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8459] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8462] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8474] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8476] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8480] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8484] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8488] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: vlan20: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8491] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8495] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8499] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8502] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8506] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8509] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8513] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8517] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8523] device (eth1): Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8531] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8539] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8544] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8583] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8595] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: vlan21: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8616] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8618] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8625] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 kernel: vlan23: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8709] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8722] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8742] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8748] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8755] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8822] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8831] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8849] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8850] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 kernel: br-ex: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8870] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 kernel: vlan22: entered promiscuous mode
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.8988] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9003] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9027] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9028] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9044] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9058] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9071] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9109] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9112] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9126] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 10:55:26 compute-2 NetworkManager[1033]: <info>  [1760007326.9133] manager: startup complete
Oct 09 10:55:26 compute-2 systemd[1]: Finished Network Manager Wait Online.
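The activation storm above is NetworkManager's three-layer Open vSwitch model: one ovs-bridge connection (br-ex), an ovs-port connection per port (eth1-port, vlan20-port through vlan23-port, plus the bridge's own port), and an ovs-interface connection per port (br-ex-if, vlan20-if through vlan23-if), with the physical eth1 enslaved through the 'ci-private-network' profile. A minimal nmcli sketch of one such chain, reusing the connection names from the log; the VLAN tag and addressing are assumptions (172.17.0.100/24 matches the cloud-init table further down):

    # bridge, port, and interface are three separate NM connections
    nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex
    nmcli conn add type ovs-port conn.interface vlan20 master br-ex con-name vlan20-port ovs-port.tag 20
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface vlan20 \
        master vlan20-port con-name vlan20-if ipv4.method manual ipv4.addresses 172.17.0.100/24
    # the physical NIC attaches to its own ovs-port as an ordinary ethernet slave
    nmcli conn add type ethernet conn.interface eth1 master eth1-port con-name ci-private-network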
Oct 09 10:55:26 compute-2 systemd[1]: Starting Cloud-init: Network Stage...
Oct 09 10:55:27 compute-2 systemd[1]: Starting Authorization Manager...
Oct 09 10:55:27 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 09 10:55:27 compute-2 polkitd[1201]: Started polkitd version 0.117
Oct 09 10:55:27 compute-2 polkitd[1201]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 10:55:27 compute-2 polkitd[1201]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 10:55:27 compute-2 polkitd[1201]: Finished loading, compiling and executing 3 rules
Oct 09 10:55:27 compute-2 systemd[1]: Started Authorization Manager.
Oct 09 10:55:27 compute-2 polkitd[1201]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 09 10:55:27 compute-2 cloud-init[1278]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 10:55:27 +0000. Up 7.92 seconds.
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   br-ex    |  True | 192.168.122.102 | 255.255.255.0 | global | fa:16:3e:a6:a2:38 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |    eth0    |  True |  38.102.83.219  | 255.255.255.0 | global | fa:16:3e:7f:54:c6 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:a6:a2:38 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: | ovs-system | False |        .        |       .       |   .    | fa:d6:28:ea:62:27 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   vlan20   |  True |   172.17.0.100  | 255.255.255.0 | global | 8e:5a:43:23:b8:66 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   vlan21   |  True |   172.18.0.100  | 255.255.255.0 | global | a2:0e:48:46:7f:e1 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   vlan22   |  True |   172.19.0.100  | 255.255.255.0 | global | 02:d7:8a:7c:68:e3 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   vlan23   |  True |   172.20.0.100  | 255.255.255.0 | global | 32:7f:24:4b:42:d0 |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   3   |    172.17.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   4   |    172.18.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   5   |    172.19.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   6   |    172.20.0.0   |    0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   7   |  192.168.122.0  |    0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct 09 10:55:27 compute-2 cloud-init[1278]: ci-info: +-------+-------------+---------+-----------+-------+
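The ci-info tables are a boot-time snapshot; the same interface and routing state can be read back at any time with iproute2:

    ip -br addr show    # device, state, addresses (cf. "Net device info")
    ip route show       # IPv4 routes (cf. "Route IPv4 info")
    ip -6 route show    # IPv6 routes (cf. "Route IPv6 info")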
Oct 09 10:55:27 compute-2 systemd[1]: Finished Cloud-init: Network Stage.
Oct 09 10:55:27 compute-2 systemd[1]: Reached target Cloud-config availability.
Oct 09 10:55:27 compute-2 systemd[1]: Reached target Network is Online.
Oct 09 10:55:27 compute-2 systemd[1]: Starting Cloud-init: Config Stage...
Oct 09 10:55:27 compute-2 systemd[1]: Starting EDPM Container Shutdown...
Oct 09 10:55:27 compute-2 systemd[1]: Starting Notify NFS peers of a restart...
Oct 09 10:55:27 compute-2 systemd[1]: Starting System Logging Service...
Oct 09 10:55:27 compute-2 systemd[1]: Starting OpenSSH server daemon...
Oct 09 10:55:27 compute-2 sm-notify[1311]: Version 2.5.4 starting
Oct 09 10:55:27 compute-2 systemd[1]: Starting Permit User Sessions...
Oct 09 10:55:27 compute-2 systemd[1]: Finished EDPM Container Shutdown.
Oct 09 10:55:27 compute-2 systemd[1]: Started Notify NFS peers of a restart.
Oct 09 10:55:27 compute-2 systemd[1]: Finished Permit User Sessions.
Oct 09 10:55:27 compute-2 systemd[1]: Started Command Scheduler.
Oct 09 10:55:27 compute-2 sshd[1313]: Server listening on 0.0.0.0 port 22.
Oct 09 10:55:27 compute-2 sshd[1313]: Server listening on :: port 22.
Oct 09 10:55:27 compute-2 systemd[1]: Started Getty on tty1.
Oct 09 10:55:27 compute-2 systemd[1]: Started Serial Getty on ttyS0.
Oct 09 10:55:27 compute-2 crond[1315]: (CRON) STARTUP (1.5.7)
Oct 09 10:55:27 compute-2 crond[1315]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 09 10:55:27 compute-2 crond[1315]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 47% if used.)
Oct 09 10:55:27 compute-2 crond[1315]: (CRON) INFO (running with inotify support)
Oct 09 10:55:27 compute-2 systemd[1]: Reached target Login Prompts.
Oct 09 10:55:27 compute-2 systemd[1]: Started OpenSSH server daemon.
Oct 09 10:55:27 compute-2 rsyslogd[1312]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1312" x-info="https://www.rsyslog.com"] start
Oct 09 10:55:27 compute-2 systemd[1]: Started System Logging Service.
Oct 09 10:55:27 compute-2 systemd[1]: Reached target Multi-User System.
Oct 09 10:55:27 compute-2 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 09 10:55:27 compute-2 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 09 10:55:27 compute-2 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 09 10:55:27 compute-2 rsyslogd[1312]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:55:27 compute-2 cloud-init[1324]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 10:55:27 +0000. Up 8.47 seconds.
Oct 09 10:55:27 compute-2 systemd[1]: Finished Cloud-init: Config Stage.
Oct 09 10:55:27 compute-2 systemd[1]: Starting Cloud-init: Final Stage...
Oct 09 10:55:28 compute-2 cloud-init[1328]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 10:55:28 +0000. Up 8.86 seconds.
Oct 09 10:55:28 compute-2 cloud-init[1328]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 10:55:28 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 8.93 seconds
Oct 09 10:55:28 compute-2 systemd[1]: Finished Cloud-init: Final Stage.
Oct 09 10:55:28 compute-2 systemd[1]: Reached target Cloud-init target.
Oct 09 10:55:28 compute-2 systemd[1]: Startup finished in 1.505s (kernel) + 2.214s (initrd) + 5.268s (userspace) = 8.989s.
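The 1.505s/2.214s/5.268s split in the line above is the same breakdown systemd-analyze prints; when a boot looks slow, the usual follow-ups are:

    systemd-analyze                                        # kernel + initrd + userspace totals
    systemd-analyze blame                                  # per-unit initialization time, longest first
    systemd-analyze critical-chain network-online.target   # what gated "Network is Online"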
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 25 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 31 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 28 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 26 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 32 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 30 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 29 affinity is now unmanaged
Oct 09 10:55:35 compute-2 irqbalance[840]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 09 10:55:35 compute-2 irqbalance[840]: IRQ 27 affinity is now unmanaged
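The "Operation not permitted" errors above are typical for virtio interrupts on a KVM guest, where the kernel manages the affinity of those IRQs itself; irqbalance simply stops touching them, which is harmless. If the log noise is unwanted, the affected IRQs can be banned up front, e.g. on RHEL-family systems (sketch; IRQ numbers taken from the messages above):

    # /etc/sysconfig/irqbalance
    IRQBALANCE_ARGS="--banirq=25 --banirq=26 --banirq=27 --banirq=28 --banirq=29 --banirq=30 --banirq=31 --banirq=32"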
Oct 09 10:55:37 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 10:55:56 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 10:56:14 compute-2 sshd-session[1334]: Accepted publickey for zuul from 192.168.122.30 port 55972 ssh2: ECDSA SHA256:RRIwAVyoA3iw56JIY0LmsrTgy+NWFNam8Udacp+6pQ4
Oct 09 10:56:14 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 10:56:14 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 10:56:14 compute-2 systemd-logind[844]: New session 1 of user zuul.
Oct 09 10:56:14 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 10:56:14 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct 09 10:56:14 compute-2 systemd[1338]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:56:14 compute-2 systemd[1338]: Queued start job for default target Main User Target.
Oct 09 10:56:14 compute-2 rsyslogd[1312]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:56:14 compute-2 systemd[1338]: Created slice User Application Slice.
Oct 09 10:56:14 compute-2 systemd[1338]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:56:14 compute-2 systemd[1338]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 10:56:14 compute-2 systemd[1338]: Reached target Paths.
Oct 09 10:56:14 compute-2 systemd[1338]: Reached target Timers.
Oct 09 10:56:14 compute-2 systemd[1338]: Starting D-Bus User Message Bus Socket...
Oct 09 10:56:14 compute-2 systemd[1338]: Starting Create User's Volatile Files and Directories...
Oct 09 10:56:14 compute-2 systemd[1338]: Listening on D-Bus User Message Bus Socket.
Oct 09 10:56:14 compute-2 systemd[1338]: Reached target Sockets.
Oct 09 10:56:14 compute-2 systemd[1338]: Finished Create User's Volatile Files and Directories.
Oct 09 10:56:14 compute-2 systemd[1338]: Reached target Basic System.
Oct 09 10:56:14 compute-2 systemd[1338]: Reached target Main User Target.
Oct 09 10:56:14 compute-2 systemd[1338]: Startup finished in 144ms.
Oct 09 10:56:14 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct 09 10:56:14 compute-2 systemd[1]: Started Session 1 of User zuul.
Oct 09 10:56:14 compute-2 sshd-session[1334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:56:14 compute-2 sudo[1381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjrpvllypooqkzuxtzujakufvyrgmugt ; cat /proc/sys/kernel/random/boot_id'
Oct 09 10:56:14 compute-2 sudo[1381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:14 compute-2 sudo[1381]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:14 compute-2 sudo[1410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blaujcfzlrvjejgijzbcsrqmlrleriok ; whoami'
Oct 09 10:56:14 compute-2 sudo[1410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:14 compute-2 sudo[1410]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:15 compute-2 sudo[1562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fztbcynpmpeabcrbqgutzuwhmtkyvoah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760007374.127834-338-269957383953601/AnsiballZ_file.py'
Oct 09 10:56:15 compute-2 sudo[1562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:15 compute-2 python3.9[1564]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 10:56:15 compute-2 sudo[1562]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:15 compute-2 sshd-session[1354]: Connection closed by 192.168.122.30 port 55972
Oct 09 10:56:15 compute-2 sshd-session[1334]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:56:15 compute-2 systemd[1]: session-1.scope: Deactivated successfully.
Oct 09 10:56:15 compute-2 systemd-logind[844]: Session 1 logged out. Waiting for processes to exit.
Oct 09 10:56:15 compute-2 systemd-logind[844]: Removed session 1.
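The sudo entries in this session show Ansible's become handshake: every escalated task is wrapped as 'echo BECOME-SUCCESS-<random token> ; <real command>' so the controller can confirm sudo succeeded before parsing module output, and the module itself ships as a one-shot AnsiballZ payload under ~/.ansible/tmp. The file task logged above corresponds roughly to this ad-hoc call (inventory and connection details are assumptions):

    ansible compute-2 -b -m ansible.builtin.file \
        -a "path=/var/lib/openstack/reboot_required state=absent"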
Oct 09 10:56:24 compute-2 sshd-session[1589]: Accepted publickey for zuul from 38.102.83.130 port 37754 ssh2: RSA SHA256:NRv4k9T2ETXKd58t0nJgaBV305UrbGtfoWqthtau3ZU
Oct 09 10:56:24 compute-2 systemd-logind[844]: New session 3 of user zuul.
Oct 09 10:56:24 compute-2 systemd[1]: Started Session 3 of User zuul.
Oct 09 10:56:24 compute-2 sshd-session[1589]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:56:24 compute-2 sudo[1665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isqyylmczgjgcdomiighnqocqesjcjin ; /usr/bin/python3'
Oct 09 10:56:24 compute-2 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:24 compute-2 useradd[1669]: new group: name=ceph-admin, GID=42478
Oct 09 10:56:24 compute-2 useradd[1669]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 09 10:56:24 compute-2 sudo[1665]: pam_unix(sudo:session): session closed for user root
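cephadm needs a dedicated SSH account on each managed host; the useradd lines above show it being provisioned with an auto-created primary group, roughly equivalent to (flags assumed):

    useradd -m -d /home/ceph-admin -s /bin/bash ceph-admin   # UID 42477 / GID 42478 were assigned automatically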
Oct 09 10:56:26 compute-2 sudo[1751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thsfrhodhzeqtbvariufenclsueiyzlo ; /usr/bin/python3'
Oct 09 10:56:26 compute-2 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:26 compute-2 sudo[1751]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:26 compute-2 sudo[1824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifvkrjuwiulmrjvfuqrkkwujfwpdpxip ; /usr/bin/python3'
Oct 09 10:56:26 compute-2 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:26 compute-2 sudo[1824]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:27 compute-2 sudo[1874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uklcktkxuryejtzvlkkbbhyndwixflci ; /usr/bin/python3'
Oct 09 10:56:27 compute-2 sudo[1874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:27 compute-2 sudo[1874]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:27 compute-2 sudo[1900]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvycwljarwpjrqooxryxyqukvhehqmou ; /usr/bin/python3'
Oct 09 10:56:27 compute-2 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:27 compute-2 sudo[1900]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:27 compute-2 sudo[1926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozocgqlyohuisgtvktnqmobtpimiqgru ; /usr/bin/python3'
Oct 09 10:56:27 compute-2 sudo[1926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:28 compute-2 sudo[1926]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:28 compute-2 sudo[1952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owuxfnxyzoxhldspaasczqlbqghduppb ; /usr/bin/python3'
Oct 09 10:56:28 compute-2 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:28 compute-2 sudo[1952]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:29 compute-2 sudo[2030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bikdvzdqlmawtlyobabjkgntncobvdoy ; /usr/bin/python3'
Oct 09 10:56:29 compute-2 sudo[2030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:29 compute-2 sudo[2030]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:29 compute-2 sudo[2103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrmygppnmpouimuaxcvpuyusfryhbece ; /usr/bin/python3'
Oct 09 10:56:29 compute-2 sudo[2103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:29 compute-2 sudo[2103]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:29 compute-2 sudo[2205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euhvzocqxoygqsravuugrwdqyuoadmyt ; /usr/bin/python3'
Oct 09 10:56:29 compute-2 sudo[2205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:30 compute-2 sudo[2205]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:30 compute-2 sudo[2278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyphcjordhspzffwdxvzqvjwwgmwvidp ; /usr/bin/python3'
Oct 09 10:56:30 compute-2 sudo[2278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:30 compute-2 sudo[2278]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:30 compute-2 sudo[2328]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghebxuyrbcsllfduyyykiizkjmwzaolo ; /usr/bin/python3'
Oct 09 10:56:30 compute-2 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:31 compute-2 python3[2330]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 10:56:32 compute-2 sudo[2328]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:32 compute-2 sudo[2423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vagvogvfdwtyawbqzsbsxwepbhowrfir ; /usr/bin/python3'
Oct 09 10:56:32 compute-2 sudo[2423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:33 compute-2 python3[2425]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 09 10:56:34 compute-2 sudo[2423]: pam_unix(sudo:session): session closed for user root
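state=present on the dnf module invocation above is idempotent package installation; outside Ansible the equivalent is simply:

    dnf install -y util-linux lvm2 jq podman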
Oct 09 10:56:35 compute-2 sudo[2450]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kalfuvquhzuzeqgflipykmrkhajzzpox ; /usr/bin/python3'
Oct 09 10:56:35 compute-2 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:35 compute-2 python3[2452]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 10:56:35 compute-2 sudo[2450]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:35 compute-2 sudo[2476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmykhnkpoeafgfafbkfoikezxnhxmwop ; /usr/bin/python3'
Oct 09 10:56:35 compute-2 sudo[2476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:35 compute-2 python3[2478]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 10:56:35 compute-2 kernel: loop: module loaded
Oct 09 10:56:35 compute-2 kernel: loop3: detected capacity change from 0 to 41943040
Oct 09 10:56:35 compute-2 sudo[2476]: pam_unix(sudo:session): session closed for user root
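The command block and the kernel line above create and attach a sparse 20 GiB backing file: 'count=0 seek=20G' writes no data but extends the file, and the reported 41943040 sectors x 512 bytes is exactly 20 GiB. Annotated:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # sparse 20 GiB file, no blocks written
    losetup /dev/loop3 /var/lib/ceph-osd-0.img                         # expose it as a block device
    lsblk                                                              # loop3 should now show SIZE=20G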
Oct 09 10:56:36 compute-2 sudo[2511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxtesrzfobquogjeupyrxxkuevxytiyc ; /usr/bin/python3'
Oct 09 10:56:36 compute-2 sudo[2511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:36 compute-2 python3[2513]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                         vgcreate ceph_vg0 /dev/loop3
                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 10:56:36 compute-2 lvm[2516]: PV /dev/loop3 not used.
Oct 09 10:56:36 compute-2 lvm[2518]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:56:36 compute-2 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 09 10:56:36 compute-2 lvm[2528]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:56:36 compute-2 lvm[2528]: VG ceph_vg0 finished
Oct 09 10:56:36 compute-2 lvm[2526]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 09 10:56:36 compute-2 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 09 10:56:36 compute-2 sudo[2511]: pam_unix(sudo:session): session closed for user root
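The loop device is then carved into a single logical volume that can be handed to a Ceph OSD; the logged commands, annotated:

    pvcreate /dev/loop3                          # mark the loop device as an LVM physical volume
    vgcreate ceph_vg0 /dev/loop3                 # one-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # single LV taking every free extent
    lvs                                          # verify ceph_vg0/ceph_lv0 exists and is active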
Oct 09 10:56:36 compute-2 sudo[2604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzdfwdkimwdtfmxyqyqxrpqnqezxogba ; /usr/bin/python3'
Oct 09 10:56:36 compute-2 sudo[2604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:36 compute-2 python3[2606]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 09 10:56:36 compute-2 sudo[2604]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:37 compute-2 sudo[2677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxejbdjjqpghzdnwornadkbceguerogn ; /usr/bin/python3'
Oct 09 10:56:37 compute-2 sudo[2677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:37 compute-2 python3[2679]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760007395.7682526-33456-81489383159591/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 10:56:37 compute-2 sudo[2677]: pam_unix(sudo:session): session closed for user root
Oct 09 10:56:37 compute-2 sudo[2727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amjmuzijpwnztfzsntqbiqihvkuikzcs ; /usr/bin/python3'
Oct 09 10:56:37 compute-2 sudo[2727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:56:38 compute-2 python3[2729]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 10:56:38 compute-2 systemd[1]: Reloading.
Oct 09 10:56:38 compute-2 systemd-rc-local-generator[2756]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:56:38 compute-2 systemd-sysv-generator[2761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:56:38 compute-2 systemd[1]: Starting Ceph OSD losetup...
Oct 09 10:56:38 compute-2 bash[2768]: /dev/loop3: [64513]:4194934 (/var/lib/ceph-osd-0.img)
Oct 09 10:56:38 compute-2 systemd[1]: Finished Ceph OSD losetup.
Oct 09 10:56:38 compute-2 lvm[2769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:56:38 compute-2 lvm[2769]: VG ceph_vg0 finished
Oct 09 10:56:38 compute-2 sudo[2727]: pam_unix(sudo:session): session closed for user root
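Loop attachments do not persist across reboots, so a oneshot unit re-creates them at boot. The log shows the unit being installed and started but not its contents, so the following is only a plausible reconstruction, consistent with the losetup mapping printed at the "Finished Ceph OSD losetup" step:

    # hypothetical contents of /etc/systemd/system/ceph-osd-losetup-0.service
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # attach the backing file; if it is already attached, just print the mapping
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img || /sbin/losetup /dev/loop3'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl enable --now ceph-osd-losetup-0.service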
Oct 09 10:56:40 compute-2 python3[2793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 10:57:46 compute-2 chronyd[850]: Selected source 172.97.210.214 (pool.ntp.org)
Oct 09 10:58:05 compute-2 sshd-session[2837]: Accepted publickey for ceph-admin from 192.168.122.100 port 60158 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:05 compute-2 systemd-logind[844]: New session 4 of user ceph-admin.
Oct 09 10:58:05 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Oct 09 10:58:05 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 09 10:58:05 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 09 10:58:05 compute-2 systemd[1]: Starting User Manager for UID 42477...
Oct 09 10:58:05 compute-2 systemd[2841]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:05 compute-2 systemd[2841]: Queued start job for default target Main User Target.
Oct 09 10:58:05 compute-2 rsyslogd[1312]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:58:05 compute-2 systemd[2841]: Created slice User Application Slice.
Oct 09 10:58:05 compute-2 systemd[2841]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:58:05 compute-2 systemd[2841]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 10:58:05 compute-2 systemd[2841]: Reached target Paths.
Oct 09 10:58:05 compute-2 systemd[2841]: Reached target Timers.
Oct 09 10:58:05 compute-2 systemd[2841]: Starting D-Bus User Message Bus Socket...
Oct 09 10:58:05 compute-2 systemd[2841]: Starting Create User's Volatile Files and Directories...
Oct 09 10:58:05 compute-2 systemd[2841]: Finished Create User's Volatile Files and Directories.
Oct 09 10:58:05 compute-2 systemd[2841]: Listening on D-Bus User Message Bus Socket.
Oct 09 10:58:05 compute-2 systemd[2841]: Reached target Sockets.
Oct 09 10:58:05 compute-2 systemd[2841]: Reached target Basic System.
Oct 09 10:58:05 compute-2 systemd[2841]: Reached target Main User Target.
Oct 09 10:58:05 compute-2 systemd[2841]: Startup finished in 103ms.
Oct 09 10:58:05 compute-2 systemd[1]: Started User Manager for UID 42477.
Oct 09 10:58:05 compute-2 systemd[1]: Started Session 4 of User ceph-admin.
Oct 09 10:58:05 compute-2 sshd-session[2855]: Accepted publickey for ceph-admin from 192.168.122.100 port 60162 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:05 compute-2 sshd-session[2837]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:05 compute-2 systemd-logind[844]: New session 6 of user ceph-admin.
Oct 09 10:58:05 compute-2 systemd[1]: Started Session 6 of User ceph-admin.
Oct 09 10:58:05 compute-2 sshd-session[2855]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:05 compute-2 sudo[2862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:58:05 compute-2 sudo[2862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:05 compute-2 sudo[2862]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:05 compute-2 sshd-session[2887]: Accepted publickey for ceph-admin from 192.168.122.100 port 60178 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:05 compute-2 systemd-logind[844]: New session 7 of user ceph-admin.
Oct 09 10:58:05 compute-2 systemd[1]: Started Session 7 of User ceph-admin.
Oct 09 10:58:05 compute-2 sshd-session[2887]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:05 compute-2 sudo[2891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-2
Oct 09 10:58:05 compute-2 sudo[2891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:05 compute-2 sudo[2891]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:05 compute-2 sshd-session[2916]: Accepted publickey for ceph-admin from 192.168.122.100 port 60186 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:05 compute-2 systemd-logind[844]: New session 8 of user ceph-admin.
Oct 09 10:58:05 compute-2 systemd[1]: Started Session 8 of User ceph-admin.
Oct 09 10:58:05 compute-2 sshd-session[2916]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:06 compute-2 sudo[2920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 09 10:58:06 compute-2 sudo[2920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:06 compute-2 sudo[2920]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:06 compute-2 sshd-session[2945]: Accepted publickey for ceph-admin from 192.168.122.100 port 60198 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:06 compute-2 systemd-logind[844]: New session 9 of user ceph-admin.
Oct 09 10:58:06 compute-2 systemd[1]: Started Session 9 of User ceph-admin.
Oct 09 10:58:06 compute-2 sshd-session[2945]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:06 compute-2 sudo[2949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:58:06 compute-2 sudo[2949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:06 compute-2 sudo[2949]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:06 compute-2 sshd-session[2974]: Accepted publickey for ceph-admin from 192.168.122.100 port 60206 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:06 compute-2 systemd-logind[844]: New session 10 of user ceph-admin.
Oct 09 10:58:06 compute-2 systemd[1]: Started Session 10 of User ceph-admin.
Oct 09 10:58:06 compute-2 sshd-session[2974]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:06 compute-2 sudo[2978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:58:06 compute-2 sudo[2978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:06 compute-2 sudo[2978]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:06 compute-2 sshd-session[3003]: Accepted publickey for ceph-admin from 192.168.122.100 port 60208 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:06 compute-2 systemd-logind[844]: New session 11 of user ceph-admin.
Oct 09 10:58:06 compute-2 systemd[1]: Started Session 11 of User ceph-admin.
Oct 09 10:58:06 compute-2 sshd-session[3003]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:07 compute-2 sudo[3007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 09 10:58:07 compute-2 sudo[3007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:07 compute-2 sudo[3007]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:07 compute-2 sshd-session[3032]: Accepted publickey for ceph-admin from 192.168.122.100 port 60224 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:07 compute-2 systemd-logind[844]: New session 12 of user ceph-admin.
Oct 09 10:58:07 compute-2 systemd[1]: Started Session 12 of User ceph-admin.
Oct 09 10:58:07 compute-2 sshd-session[3032]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:07 compute-2 sudo[3036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:58:07 compute-2 sudo[3036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:07 compute-2 sudo[3036]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:07 compute-2 sshd-session[3061]: Accepted publickey for ceph-admin from 192.168.122.100 port 60226 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:07 compute-2 systemd-logind[844]: New session 13 of user ceph-admin.
Oct 09 10:58:07 compute-2 systemd[1]: Started Session 13 of User ceph-admin.
Oct 09 10:58:07 compute-2 sshd-session[3061]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:07 compute-2 sudo[3065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 09 10:58:07 compute-2 sudo[3065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:07 compute-2 sudo[3065]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:07 compute-2 sshd-session[3090]: Accepted publickey for ceph-admin from 192.168.122.100 port 60232 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:07 compute-2 systemd-logind[844]: New session 14 of user ceph-admin.
Oct 09 10:58:07 compute-2 systemd[1]: Started Session 14 of User ceph-admin.
Oct 09 10:58:07 compute-2 sshd-session[3090]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:08 compute-2 sshd-session[3117]: Accepted publickey for ceph-admin from 192.168.122.100 port 60242 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:08 compute-2 systemd-logind[844]: New session 15 of user ceph-admin.
Oct 09 10:58:08 compute-2 systemd[1]: Started Session 15 of User ceph-admin.
Oct 09 10:58:08 compute-2 sshd-session[3117]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:09 compute-2 sudo[3121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 09 10:58:09 compute-2 sudo[3121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:09 compute-2 sudo[3121]: pam_unix(sudo:session): session closed for user root
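The sudo sequence above is cephadm's staged-install pattern: the orchestrator touches a .new file under a per-cluster staging tree, copies the payload in over SSH, fixes ownership and mode, and only then mv's it to its final path, so the destination never holds a half-written file. A minimal sketch of the same idea, assuming staging happens in the destination directory (in the log the staging tree lives under /tmp, so the final mv can cross filesystems and fall back to copy-and-unlink rather than an atomic rename):

import os
import tempfile

def install_atomically(data: bytes, dest: str, mode: int = 0o644) -> None:
    # Stage a ".new" file next to the destination so the final rename
    # is atomic on the same filesystem (mirrors touch / chmod / mv).
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest), suffix=".new")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.chmod(tmp, mode)      # the "chmod 644 ...new" step
        os.replace(tmp, dest)    # the "mv ...new <dest>" step; rename(2)
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise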
Oct 09 10:58:09 compute-2 sshd-session[3146]: Accepted publickey for ceph-admin from 192.168.122.100 port 60254 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 10:58:09 compute-2 systemd-logind[844]: New session 16 of user ceph-admin.
Oct 09 10:58:09 compute-2 systemd[1]: Started Session 16 of User ceph-admin.
Oct 09 10:58:09 compute-2 sshd-session[3146]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 10:58:09 compute-2 sudo[3150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-2
Oct 09 10:58:09 compute-2 sudo[3150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat77288677-merged.mount: Deactivated successfully.
Oct 09 10:58:09 compute-2 kernel: evm: overlay not supported
Oct 09 10:58:09 compute-2 podman[3175]: 2025-10-09 10:58:09.78129134 +0000 UTC m=+0.146514821 system refresh
Oct 09 10:58:09 compute-2 sudo[3150]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:10 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:47 compute-2 sudo[3201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:58:47 compute-2 sudo[3201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:47 compute-2 sudo[3201]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:47 compute-2 sudo[3226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:58:47 compute-2 sudo[3226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:47 compute-2 sudo[3226]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:47 compute-2 sudo[3251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 10:58:47 compute-2 sudo[3251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:47 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:47 compute-2 systemd[1338]: Starting Mark boot as successful...
Oct 09 10:58:47 compute-2 systemd[1338]: Finished Mark boot as successful.
Oct 09 10:58:47 compute-2 sudo[3251]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:47 compute-2 sudo[3297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:58:47 compute-2 sudo[3297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:47 compute-2 sudo[3297]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:47 compute-2 sudo[3322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 10:58:47 compute-2 sudo[3322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:47 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:47 compute-2 sudo[3322]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:48 compute-2 sudo[3386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:58:48 compute-2 sudo[3386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:48 compute-2 sudo[3386]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:48 compute-2 sudo[3411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:58:48 compute-2 sudo[3411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:48 compute-2 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 3448 (sysctl)
Oct 09 10:58:48 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:48 compute-2 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 09 10:58:48 compute-2 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 09 10:58:48 compute-2 sudo[3411]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:48 compute-2 sudo[3470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:58:48 compute-2 sudo[3470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:48 compute-2 sudo[3470]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:48 compute-2 sudo[3495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 10:58:48 compute-2 sudo[3495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:49 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:49 compute-2 sudo[3495]: pam_unix(sudo:session): session closed for user root
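Each orchestrator call above has the same shape: a `which python3` probe, then the staged cephadm script run with an explicit --timeout and a subcommand (check-host to validate the host, gather-facts and list-networks to feed host inventory back to the manager). A hedged sketch of that calling convention; the wrapper itself is illustrative, not cephadm's actual code:

import shutil
import subprocess

CEPHADM = "/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36"

def run_cephadm(*args: str, timeout: int = 895) -> str:
    # Resolve the interpreter first, as the `which python3` lines do,
    # then invoke the staged cephadm script with a per-call timeout.
    python3 = shutil.which("python3") or "/usr/bin/python3"
    cmd = ["sudo", python3, CEPHADM, "--timeout", str(timeout), *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# The calls seen above, in order:
#   run_cephadm("check-host", "--expect-hostname", "compute-2")
#   run_cephadm("check-host")
#   run_cephadm("gather-facts")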
Oct 09 10:58:49 compute-2 sudo[3539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:58:49 compute-2 sudo[3539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:49 compute-2 sudo[3539]: pam_unix(sudo:session): session closed for user root
Oct 09 10:58:49 compute-2 sudo[3564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -- inventory --format=json-pretty --filter-for-batch
Oct 09 10:58:49 compute-2 sudo[3564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:58:49 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:49 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:58:51 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat4000707470-lower\x2dmapped.mount: Deactivated successfully.
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.361220962 +0000 UTC m=+15.773531407 container create 7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_raman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 09 10:59:05 compute-2 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 09 10:59:05 compute-2 systemd[1]: Started libpod-conmon-7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef.scope.
Oct 09 10:59:05 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.345058724 +0000 UTC m=+15.757369169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.443338318 +0000 UTC m=+15.855648773 container init 7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.449408614 +0000 UTC m=+15.861719059 container start 7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_raman, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.45519761 +0000 UTC m=+15.867508085 container attach 7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_raman, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 09 10:59:05 compute-2 modest_raman[3686]: 167 167
Oct 09 10:59:05 compute-2 systemd[1]: libpod-7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef.scope: Deactivated successfully.
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.459663272 +0000 UTC m=+15.871973717 container died 7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 10:59:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-0515aeb7fcc4ee481d85ec5656e16e1d9d7a50e8265f517eb30b4b588d3b4d42-merged.mount: Deactivated successfully.
Oct 09 10:59:05 compute-2 podman[3626]: 2025-10-09 10:59:05.489786224 +0000 UTC m=+15.902096669 container remove 7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_raman, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:05 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:05 compute-2 systemd[1]: libpod-conmon-7e732ffa3472dc95cfcf8a4160e60d8cd543c98599c050be2aef3a73dcbdf2ef.scope: Deactivated successfully.
Oct 09 10:59:05 compute-2 podman[3709]: 2025-10-09 10:59:05.629590866 +0000 UTC m=+0.037736061 container create 55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 09 10:59:05 compute-2 systemd[1]: Started libpod-conmon-55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10.scope.
Oct 09 10:59:05 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:05 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe39f1e415b230e76b72ccf3c6bc5c7a9ce257a23c9ab67c80f1e7a3e75fc0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:05 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fe39f1e415b230e76b72ccf3c6bc5c7a9ce257a23c9ab67c80f1e7a3e75fc0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:05 compute-2 podman[3709]: 2025-10-09 10:59:05.69482629 +0000 UTC m=+0.102971495 container init 55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ardinghelli, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Oct 09 10:59:05 compute-2 podman[3709]: 2025-10-09 10:59:05.701660661 +0000 UTC m=+0.109805856 container start 55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ardinghelli, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 09 10:59:05 compute-2 podman[3709]: 2025-10-09 10:59:05.705276424 +0000 UTC m=+0.113421619 container attach 55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ardinghelli, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 10:59:05 compute-2 podman[3709]: 2025-10-09 10:59:05.611216923 +0000 UTC m=+0.019362138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]: [
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:     {
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "available": false,
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "being_replaced": false,
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "ceph_device_lvm": false,
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "lsm_data": {},
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "lvs": [],
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "path": "/dev/sr0",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "rejected_reasons": [
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "Insufficient space (<5GB)",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "Has a FileSystem"
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         ],
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         "sys_api": {
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "actuators": null,
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "device_nodes": [
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:                 "sr0"
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             ],
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "devname": "sr0",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "human_readable_size": "482.00 KB",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "id_bus": "ata",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "model": "QEMU DVD-ROM",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "nr_requests": "2",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "parent": "/dev/sr0",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "partitions": {},
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "path": "/dev/sr0",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "removable": "1",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "rev": "2.5+",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "ro": "0",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "rotational": "0",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "sas_address": "",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "sas_device_handle": "",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "scheduler_mode": "mq-deadline",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "sectors": 0,
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "sectorsize": "2048",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "size": 493568.0,
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "support_discard": "2048",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "type": "disk",
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:             "vendor": "QEMU"
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:         }
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]:     }
Oct 09 10:59:06 compute-2 nostalgic_ardinghelli[3726]: ]
Oct 09 10:59:06 compute-2 systemd[1]: libpod-55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10.scope: Deactivated successfully.
Oct 09 10:59:06 compute-2 podman[3709]: 2025-10-09 10:59:06.354936555 +0000 UTC m=+0.763081750 container died 55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:06 compute-2 systemd[1]: var-lib-containers-storage-overlay-8fe39f1e415b230e76b72ccf3c6bc5c7a9ce257a23c9ab67c80f1e7a3e75fc0a-merged.mount: Deactivated successfully.
Oct 09 10:59:06 compute-2 podman[3709]: 2025-10-09 10:59:06.397784148 +0000 UTC m=+0.805929343 container remove 55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_ardinghelli, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 09 10:59:06 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:06 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:06 compute-2 systemd[1]: libpod-conmon-55ca55f035e9b15b3a80e0cf31e07cc49affbe422e22f2c30e867631ac145a10.scope: Deactivated successfully.
Oct 09 10:59:06 compute-2 sudo[3564]: pam_unix(sudo:session): session closed for user root
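The JSON report above is why no OSD will be created on this host: the only device, /dev/sr0 (a 482 KB QEMU DVD-ROM), is marked "available": false with two rejected_reasons. The orchestrator keeps only devices that are available and carry no rejection reasons; a small sketch of that filter, using the field names from the report:

import json

def usable_devices(inventory_report: str) -> list[str]:
    # Keep only devices ceph-volume flags as usable for new OSDs.
    return [
        dev["path"]
        for dev in json.loads(inventory_report)
        if dev["available"] and not dev["rejected_reasons"]
    ]

# For the report above this returns [] -- /dev/sr0 is rejected for
# "Insufficient space (<5GB)" and "Has a FileSystem".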
Oct 09 10:59:06 compute-2 sudo[4717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 10:59:06 compute-2 sudo[4717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:06 compute-2 sudo[4717]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:06 compute-2 sudo[4742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 10:59:06 compute-2 sudo[4742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:06 compute-2 sudo[4742]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:06 compute-2 sudo[4767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:06 compute-2 sudo[4767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:06 compute-2 sudo[4767]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:06 compute-2 sudo[4792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:06 compute-2 sudo[4792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:06 compute-2 sudo[4792]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:06 compute-2 sudo[4817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:06 compute-2 sudo[4817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:06 compute-2 sudo[4817]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:06 compute-2 sudo[4865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:06 compute-2 sudo[4865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[4865]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[4890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:07 compute-2 sudo[4890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[4890]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[4915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 10:59:07 compute-2 sudo[4915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[4915]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[4940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 10:59:07 compute-2 sudo[4940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[4940]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[4965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 10:59:07 compute-2 sudo[4965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[4965]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[4990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:07 compute-2 sudo[4990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[4990]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:07 compute-2 sudo[5015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5015]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:07 compute-2 sudo[5040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5040]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:07 compute-2 sudo[5088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5088]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:07 compute-2 sudo[5113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5113]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 10:59:07 compute-2 sudo[5138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5138]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 10:59:07 compute-2 sudo[5163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5163]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 10:59:07 compute-2 sudo[5188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5188]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 10:59:07 compute-2 sudo[5213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5213]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:07 compute-2 sudo[5238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:07 compute-2 sudo[5238]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:07 compute-2 sudo[5263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 10:59:07 compute-2 sudo[5263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5263]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 10:59:08 compute-2 sudo[5311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5311]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 10:59:08 compute-2 sudo[5336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5336]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 10:59:08 compute-2 sudo[5361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5361]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 10:59:08 compute-2 sudo[5386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5386]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 10:59:08 compute-2 sudo[5411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5411]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 10:59:08 compute-2 sudo[5436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5436]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:08 compute-2 sudo[5461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5461]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 10:59:08 compute-2 sudo[5486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5486]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 10:59:08 compute-2 sudo[5534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5534]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 10:59:08 compute-2 sudo[5559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5559]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 10:59:08 compute-2 sudo[5584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5584]: pam_unix(sudo:session): session closed for user root
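The same staged-install dance repeats for /etc/ceph/ceph.conf and the admin keyring, each written twice (host-wide under /etc/ceph and per-cluster under /var/lib/ceph/<fsid>/config), with one deliberate difference: configs end up root:root 0644 while keyrings are tightened to 0600, since ceph.client.admin.keyring grants full cluster access. A tiny sketch of that mode policy; the helper is illustrative:

# Mode policy mirrored from the log: configs world-readable,
# keyrings readable by owner only.
def target_mode(path: str) -> int:
    return 0o600 if path.endswith(".keyring") else 0o644

assert target_mode("/etc/ceph/ceph.conf") == 0o644
assert target_mode("/etc/ceph/ceph.client.admin.keyring") == 0o600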
Oct 09 10:59:08 compute-2 sudo[5609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:08 compute-2 sudo[5609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:08 compute-2 sudo[5609]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:08 compute-2 sudo[5634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:08 compute-2 sudo[5634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:09 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:09 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.164324144 +0000 UTC m=+0.035027389 container create a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:09 compute-2 systemd[1]: Started libpod-conmon-a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e.scope.
Oct 09 10:59:09 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.218318116 +0000 UTC m=+0.089021401 container init a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.223939877 +0000 UTC m=+0.094643132 container start a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_grothendieck, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.22727802 +0000 UTC m=+0.097981385 container attach a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:09 compute-2 vigilant_grothendieck[5715]: 167 167
Oct 09 10:59:09 compute-2 systemd[1]: libpod-a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e.scope: Deactivated successfully.
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.228269204 +0000 UTC m=+0.098972449 container died a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.148032572 +0000 UTC m=+0.018735837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:09 compute-2 podman[5699]: 2025-10-09 10:59:09.276872223 +0000 UTC m=+0.147575468 container remove a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_grothendieck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:09 compute-2 systemd[1]: libpod-conmon-a6cd84e1394c38e665459d20ca50136f7fba1dcf80cc5df87ad005ad1ffc9e3e.scope: Deactivated successfully.
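The throwaway containers that print "167 167" (modest_raman earlier, vigilant_grothendieck here) exist only to discover the ceph user's uid:gid inside the image before daemon directories are created on the host. A sketch of such a probe, assuming it stats a ceph-owned path inside the image; the exact command cephadm runs is not shown in this log:

import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"

def ceph_uid_gid(image: str = IMAGE) -> tuple[int, int]:
    # Run a short-lived container whose only output is "<uid> <gid>",
    # matching the "167 167" lines above. /var/lib/ceph as the stat
    # target is an assumption for illustration.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    return int(out[0]), int(out[1])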
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.332007093 +0000 UTC m=+0.035012789 container create 5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 09 10:59:09 compute-2 systemd[1]: Started libpod-conmon-5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b.scope.
Oct 09 10:59:09 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a72af65a9cc6ce207857a756c35f5ef4c2dadcff3b83c58a80c0892f0e0f0f/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a72af65a9cc6ce207857a756c35f5ef4c2dadcff3b83c58a80c0892f0e0f0f/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a72af65a9cc6ce207857a756c35f5ef4c2dadcff3b83c58a80c0892f0e0f0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50a72af65a9cc6ce207857a756c35f5ef4c2dadcff3b83c58a80c0892f0e0f0f/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.396014715 +0000 UTC m=+0.099020421 container init 5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.402463893 +0000 UTC m=+0.105469569 container start 5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.404995059 +0000 UTC m=+0.108000735 container attach 5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_merkle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.315644718 +0000 UTC m=+0.018650414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:09 compute-2 systemd[1]: libpod-5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b.scope: Deactivated successfully.
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.475719349 +0000 UTC m=+0.178725045 container died 5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:09 compute-2 podman[5732]: 2025-10-09 10:59:09.507854799 +0000 UTC m=+0.210860475 container remove 5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_merkle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 09 10:59:09 compute-2 systemd[1]: libpod-conmon-5c230307aee22968e2ab84361338153203535ef56803d3f51186f8fb2fb54e4b.scope: Deactivated successfully.
Oct 09 10:59:09 compute-2 systemd[1]: Reloading.
Oct 09 10:59:09 compute-2 systemd-rc-local-generator[5813]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:09 compute-2 systemd-sysv-generator[5818]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:09 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:09 compute-2 systemd[1]: Reloading.
Oct 09 10:59:09 compute-2 systemd-sysv-generator[5855]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:09 compute-2 systemd-rc-local-generator[5848]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:09 compute-2 systemd[1]: Reached target All Ceph clusters and services.
Oct 09 10:59:09 compute-2 systemd[1]: Reloading.
Oct 09 10:59:10 compute-2 systemd-rc-local-generator[5890]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:10 compute-2 systemd-sysv-generator[5894]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:10 compute-2 systemd[1]: Reached target Ceph cluster e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 10:59:10 compute-2 systemd[1]: Reloading.
Oct 09 10:59:10 compute-2 systemd-rc-local-generator[5928]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:10 compute-2 systemd-sysv-generator[5932]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:10 compute-2 systemd[1]: Reloading.
Oct 09 10:59:10 compute-2 systemd-rc-local-generator[5967]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:10 compute-2 systemd-sysv-generator[5970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:10 compute-2 systemd[1]: Created slice Slice /system/ceph-e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 10:59:10 compute-2 systemd[1]: Reached target System Time Set.
Oct 09 10:59:10 compute-2 systemd[1]: Reached target System Time Synchronized.
Oct 09 10:59:10 compute-2 systemd[1]: Starting Ceph mon.compute-2 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 10:59:10 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:10 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 10:59:10 compute-2 podman[6024]: 2025-10-09 10:59:10.893802977 +0000 UTC m=+0.034415568 container create 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 09 10:59:10 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbcdd76157cb27ee51ceb793728a2f4394e0d094b2c1d804d56ae078c5b5ad6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:10 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbcdd76157cb27ee51ceb793728a2f4394e0d094b2c1d804d56ae078c5b5ad6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:10 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbcdd76157cb27ee51ceb793728a2f4394e0d094b2c1d804d56ae078c5b5ad6/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:10 compute-2 podman[6024]: 2025-10-09 10:59:10.94605949 +0000 UTC m=+0.086672101 container init 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 10:59:10 compute-2 podman[6024]: 2025-10-09 10:59:10.95076972 +0000 UTC m=+0.091382311 container start 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 09 10:59:10 compute-2 bash[6024]: 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520
Oct 09 10:59:10 compute-2 podman[6024]: 2025-10-09 10:59:10.877899258 +0000 UTC m=+0.018511869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
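
The podman create/init/start events above expose the mon container's OCI image labels (CEPH_REF=squid, CEPH_SHA1, the build metadata, and so on). A minimal sketch for reading those labels back out of the running container via `podman inspect` JSON; the container name is taken verbatim from the log, and a standard podman installation is assumed:

    #!/usr/bin/env python3
    # Sketch: dump selected image labels of the mon container via podman inspect.
    import json
    import subprocess

    name = "ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2"  # from the log
    out = subprocess.run(["podman", "inspect", name],
                         capture_output=True, text=True, check=True).stdout
    labels = json.loads(out)[0].get("Config", {}).get("Labels") or {}
    for key in ("CEPH_REF", "CEPH_SHA1", "org.label-schema.build-date"):
        print(f"{key} = {labels.get(key)}")
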
Oct 09 10:59:10 compute-2 systemd[1]: Started Ceph mon.compute-2 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 10:59:10 compute-2 ceph-mon[6044]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 10:59:10 compute-2 ceph-mon[6044]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: pidfile_write: ignore empty --pid-file
Oct 09 10:59:10 compute-2 ceph-mon[6044]: load: jerasure load: lrc 
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: RocksDB version: 7.9.2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Git sha 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: DB SUMMARY
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: DB Session ID:  6DJJ1ETB7PHV9MCMB6A8
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: CURRENT file:  CURRENT
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                         Options.error_if_exists: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.create_if_missing: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                                     Options.env: 0x5654c9bd8c20
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                                Options.info_log: 0x5654cb699a20
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                              Options.statistics: (nil)
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                               Options.use_fsync: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                              Options.db_log_dir: 
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                                 Options.wal_dir: 
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                    Options.write_buffer_manager: 0x5654cb69d900
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.unordered_write: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                               Options.row_cache: None
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                              Options.wal_filter: None
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.two_write_queues: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.wal_compression: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.atomic_flush: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.max_background_jobs: 2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.max_background_compactions: -1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.max_subcompactions: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.max_total_wal_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                          Options.max_open_files: -1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:       Options.compaction_readahead_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Compression algorithms supported:
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kZSTD supported: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kXpressCompression supported: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kBZip2Compression supported: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kLZ4Compression supported: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kZlibCompression supported: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         kSnappyCompression supported: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:           Options.merge_operator: 
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5654cb6985c0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5654cb6bd350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
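
Two numbers in the table_factory dump above are worth decoding: the block_cache capacity of 536870912 bytes is exactly the "512.00 MB" that the BinnedLRUCache stats report further down (same 0x5654cb6bd350 pointer), and num_shard_bits 4 means the cache is split into 16 shards. A quick arithmetic check (no Ceph or RocksDB assumptions):

    # Decode the block cache figures from the RocksDB options dump above.
    capacity = 536_870_912
    print(capacity / 2**20)          # 512.0 MiB, matches "capacity: 512.00 MB"
    print(capacity == 512 * 2**20)   # True
    print(2 ** 4)                    # 16 shards for num_shard_bits = 4
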
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:        Options.write_buffer_size: 33554432
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:  Options.max_write_buffer_number: 2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.compression: NoCompression
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e92a8cb1-44df-49f9-9e99-9e69cedae100
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007550988964, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007550990765, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007550, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e92a8cb1-44df-49f9-9e99-9e69cedae100", "db_session_id": "6DJJ1ETB7PHV9MCMB6A8", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007550990877, "job": 1, "event": "recovery_finished"}
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5654cb6bee00
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: DB pointer 0x5654cb7c8000
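
The EVENT_LOG_v1 lines above (recovery_started, table_file_creation, recovery_finished) carry a machine-readable JSON payload after the marker, which makes the journal easy to mine for RocksDB events. A minimal parsing sketch; the sample line is abridged from the log above:

    import json

    def parse_event_log(line: str):
        """Return the JSON payload of a RocksDB EVENT_LOG_v1 line, else None."""
        marker = "EVENT_LOG_v1 "
        idx = line.find(marker)
        if idx == -1:
            return None
        return json.loads(line[idx + len(marker):])

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1760007550990877, '
              '"job": 1, "event": "recovery_finished"}')
    event = parse_event_log(sample)
    print(event["event"], event["time_micros"])  # recovery_finished 1760007550990877
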
Oct 09 10:59:10 compute-2 ceph-mon[6044]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:59:10 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.17 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5654cb6bd350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 10:59:10 compute-2 ceph-mon[6044]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Oct 09 10:59:10 compute-2 ceph-mon[6044]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:10 compute-2 ceph-mon[6044]: mon.compute-2@-1(???) e0 preinit fsid e990987d-9393-5e96-99ae-9e3a3319f191
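
The two preceding lines show the monitor expanding its public address into the dual-protocol addrvec form: msgr2 on port 3300 plus legacy msgr1 on port 6789. A client-side ceph.conf would typically point at the monitor with the same syntax; a minimal sketch using only values visible in this log:

    # ceph.conf sketch for a client of this cluster (values from the log)
    [global]
    fsid = e990987d-9393-5e96-99ae-9e3a3319f191
    mon_host = [v2:192.168.122.102:3300,v1:192.168.122.102:6789]
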
Oct 09 10:59:11 compute-2 sudo[5634]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).mds e1 new map
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).mds e1 print_map
                                          e1
                                          btime 2025-10-09T10:57:16.742582+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: -1
                                           
                                          No filesystems configured
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                          service_name: mon
                                          placement:
                                            hosts:
                                            - compute-0
                                            - compute-1
                                            - compute-2
                                          ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                          service_name: mgr
                                          placement:
                                            hosts:
                                            - compute-0
                                            - compute-1
                                            - compute-2
                                          ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Deploying daemon crash.compute-1 on compute-1
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
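
Both apply failures above share one root cause: the mon and mgr specs pin placement to compute-0/1/2, but compute-2 is evidently not yet in the orchestrator's host inventory, so placement fails with "Unknown hosts" and raises CEPHADM_APPLY_SPEC_FAIL. The mon spec embedded in the log, restated as the standalone YAML it was applied from (content verbatim; only the framing comment is added):

    # mon.yaml -- spec quoted verbatim from the failure message above
    service_type: mon
    service_name: mon
    placement:
      hosts:
      - compute-0
      - compute-1
      - compute-2

Once compute-2 is registered (for example with `ceph orch host add compute-2`, assuming the standard cephadm workflow), re-applying both specs should let placement succeed and the health check clear.
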
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/4115323594' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/3173225471' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/4115323594' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0ea02d81-16d9-4b32-9888-cc7ebc83243e"}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e4: 1 total, 0 up, 1 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/3173225471' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "248bff20-87f8-4fd3-80f7-ec7e50afb5f6"}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e5: 2 total, 0 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/2749783588' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/155858687' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Deploying daemon osd.0 on compute-0
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Deploying daemon osd.1 on compute-1
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/454562265' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e6: 2 total, 0 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
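
The two crush create-or-move commands just above register each OSD with weight 0.0195. CRUSH weights are conventionally the device capacity in TiB, and 0.0195 TiB is 20 GiB per OSD, which matches the pgmap below reporting first 20 GiB and then 40 GiB total as each of the two OSDs checks in. A quick check of that convention:

    # CRUSH weight ~= capacity in TiB: 20 GiB -> 0.0195
    size_bytes = 20 * 2**30          # 20 GiB per OSD
    weight = size_bytes / 2**40      # expressed in TiB
    print(round(weight, 4))          # 0.0195
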
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e7: 2 total, 0 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: purged_snaps scrub starts
Oct 09 10:59:11 compute-2 ceph-mon[6044]: purged_snaps scrub ok
Oct 09 10:59:11 compute-2 ceph-mon[6044]: purged_snaps scrub starts
Oct 09 10:59:11 compute-2 ceph-mon[6044]: purged_snaps scrub ok
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Adjusting osd_memory_target on compute-1 to  5247M
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Adjusting osd_memory_target on compute-0 to 127.8M
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Unable to set osd_memory_target on compute-0 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
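
The rejected autotune above is plain arithmetic: cephadm computed a target of 134060032 bytes for compute-0 (about 127.8 MiB, matching the "127.8M" in the preceding line), but osd_memory_target enforces a floor of 939524096 bytes, which is exactly 896 MiB. A check of both figures:

    requested = 134_060_032
    minimum = 939_524_096
    print(round(requested / 2**20, 1))   # 127.8 MiB, as logged
    print(minimum == 896 * 2**20)        # True: the floor is 896 MiB
    print(requested < minimum)           # True, hence "below minimum"
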
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: OSD bench result of 8841.967619 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
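
This warning (repeated for osd.0 a few lines below) means the monitor considered the self-benchmark figure of ~8842 IOPS implausible for the mclock scheduler and kept the default capacity of 315 IOPS. Following the message's own recommendation, the override would be set per OSD after an external benchmark such as fio; the value 500 below is purely illustrative:

    ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 500
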
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osd.1 [v2:192.168.122.101:6800/2648504134,v1:192.168.122.101:6801/2648504134] boot
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e8: 2 total, 1 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: OSD bench result of 9039.559272 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osd.0 [v2:192.168.122.100:6802/3267775411,v1:192.168.122.100:6803/3267775411] boot
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e9: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v40: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e10: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e11: 2 total, 2 up, 2 in
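Each cmd=[{...}] audit entry is the JSON wire form of a CLI command; the two .mgr operations that just finished map roughly to the following sketch (flag spellings assumed from the JSON keys):

    ceph osd pool create .mgr 1 --pg-num-min 1 --pg-num-max 32 --yes-i-really-mean-it
    ceph osd pool application enable .mgr mgr --yes-i-really-mean-it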
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v43: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mgrmap e9: compute-0.izrudc(active, since 81s)
Oct 09 10:59:11 compute-2 ceph-mon[6044]: osdmap e12: 2 total, 2 up, 2 in
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v45: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
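cephadm refreshes /etc/ceph and its own copy under /var/lib/ceph/<fsid>/config on hosts carrying the _admin label. The minimal conf it distributes holds only the fsid and the mon addresses; a sketch of what "ceph config generate-minimal-conf" emits at this point, with the address taken from the epoch-2 monmap below (mon_host grows as compute-2 and compute-1 join):

    # minimal ceph.conf for e990987d-9393-5e96-99ae-9e3a3319f191
    [global]
            fsid = e990987d-9393-5e96-99ae-9e3a3319f191
            mon_host = [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]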
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:11 compute-2 ceph-mon[6044]: pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Deploying daemon mon.compute-2 on compute-2
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 09 10:59:11 compute-2 ceph-mon[6044]: Cluster is now healthy
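CEPHADM_APPLY_SPEC_FAIL clears once the mon and mgr placements can be satisfied; the service spec being applied is not in the log, but given the daemons deployed across compute-0/1/2 it would look roughly like:

    ceph orch apply mon --placement="compute-0,compute-1,compute-2"
    ceph orch apply mgr --placement="compute-0,compute-1,compute-2"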
Oct 09 10:59:11 compute-2 ceph-mon[6044]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct 09 10:59:13 compute-2 ceph-mon[6044]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Oct 09 10:59:13 compute-2 ceph-mon[6044]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct 09 10:59:13 compute-2 ceph-mon[6044]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 09 10:59:13 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 10:59:14 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 10:59:14 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,os=Linux}
Oct 09 10:59:16 compute-2 ceph-mon[6044]: Deploying daemon mon.compute-1 on compute-1
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-0 calling monitor election
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2 calling monitor election
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 09 10:59:16 compute-2 ceph-mon[6044]: monmap epoch 2
Oct 09 10:59:16 compute-2 ceph-mon[6044]: fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:16 compute-2 ceph-mon[6044]: last_changed 2025-10-09T10:59:11.081153+0000
Oct 09 10:59:16 compute-2 ceph-mon[6044]: created 2025-10-09T10:57:14.796633+0000
Oct 09 10:59:16 compute-2 ceph-mon[6044]: min_mon_release 19 (squid)
Oct 09 10:59:16 compute-2 ceph-mon[6044]: election_strategy: 1
Oct 09 10:59:16 compute-2 ceph-mon[6044]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 09 10:59:16 compute-2 ceph-mon[6044]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 09 10:59:16 compute-2 ceph-mon[6044]: fsmap 
Oct 09 10:59:16 compute-2 ceph-mon[6044]: osdmap e12: 2 total, 2 up, 2 in
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mgrmap e9: compute-0.izrudc(active, since 99s)
Oct 09 10:59:16 compute-2 ceph-mon[6044]: overall HEALTH_OK
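This monmap/fsmap/osdmap/mgrmap block is the summary the leader logs after each election; the same view is available on demand with standard CLI, sketched here:

    ceph mon dump                        # monmap epoch, election_strategy, per-rank addresses
    ceph quorum_status -f json-pretty    # leader name, quorum ranks, features
    ceph -s                              # one-shot health, osdmap and pgmap summary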
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:16 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 10:59:16 compute-2 sudo[6083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:16 compute-2 sudo[6083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:16 compute-2 sudo[6083]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:16 compute-2 sudo[6108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:16 compute-2 sudo[6108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 10:59:16 compute-2 ceph-mon[6044]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct 09 10:59:16 compute-2 ceph-mon[6044]: paxos.1).electionLogic(10) init, last seen epoch 10
Oct 09 10:59:16 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.678233738 +0000 UTC m=+0.045869357 container create 4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 09 10:59:16 compute-2 systemd[1]: Started libpod-conmon-4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24.scope.
Oct 09 10:59:16 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.756602266 +0000 UTC m=+0.124237895 container init 4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.662832935 +0000 UTC m=+0.030468574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.76259787 +0000 UTC m=+0.130233489 container start 4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.76556084 +0000 UTC m=+0.133196459 container attach 4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 09 10:59:16 compute-2 modest_mcclintock[6187]: 167 167
Oct 09 10:59:16 compute-2 systemd[1]: libpod-4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24.scope: Deactivated successfully.
Oct 09 10:59:16 compute-2 conmon[6187]: conmon 4ef9aeeb8c5a1a72fee2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24.scope/container/memory.events
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.770142196 +0000 UTC m=+0.137777815 container died 4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:16 compute-2 systemd[1]: var-lib-containers-storage-overlay-2f38635d08deb090b80087e2a91c826a248d04c241e7c79a171b94146da79270-merged.mount: Deactivated successfully.
Oct 09 10:59:16 compute-2 podman[6171]: 2025-10-09 10:59:16.806059614 +0000 UTC m=+0.173695233 container remove 4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_mcclintock, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 10:59:16 compute-2 systemd[1]: libpod-conmon-4ef9aeeb8c5a1a72fee22038bbfe641fdc008e22954b6f62309513b4f8425f24.scope: Deactivated successfully.
Oct 09 10:59:16 compute-2 systemd[1]: Reloading.
Oct 09 10:59:16 compute-2 systemd-rc-local-generator[6229]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:16 compute-2 systemd-sysv-generator[6233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:17 compute-2 systemd[1]: Reloading.
Oct 09 10:59:17 compute-2 systemd-rc-local-generator[6271]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:17 compute-2 systemd-sysv-generator[6274]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:17 compute-2 systemd[1]: Starting Ceph mgr.compute-2.agiurv for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 10:59:17 compute-2 podman[6328]: 2025-10-09 10:59:17.563444798 +0000 UTC m=+0.087412156 container create ddd2c1d76807d7d80c8c7d434554f560d4b526b465a554a213288d4aa7a4cbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 09 10:59:17 compute-2 podman[6328]: 2025-10-09 10:59:17.495024367 +0000 UTC m=+0.018991745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702331c562119680df8ad9672f0516b9366132402aba4b1c39bee520745ce6d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702331c562119680df8ad9672f0516b9366132402aba4b1c39bee520745ce6d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702331c562119680df8ad9672f0516b9366132402aba4b1c39bee520745ce6d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/702331c562119680df8ad9672f0516b9366132402aba4b1c39bee520745ce6d8/merged/var/lib/ceph/mgr/ceph-compute-2.agiurv supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:17 compute-2 podman[6328]: 2025-10-09 10:59:17.625296487 +0000 UTC m=+0.149263855 container init ddd2c1d76807d7d80c8c7d434554f560d4b526b465a554a213288d4aa7a4cbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Oct 09 10:59:17 compute-2 podman[6328]: 2025-10-09 10:59:17.629432227 +0000 UTC m=+0.153399585 container start ddd2c1d76807d7d80c8c7d434554f560d4b526b465a554a213288d4aa7a4cbeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:17 compute-2 bash[6328]: ddd2c1d76807d7d80c8c7d434554f560d4b526b465a554a213288d4aa7a4cbeb
Oct 09 10:59:17 compute-2 systemd[1]: Started Ceph mgr.compute-2.agiurv for e990987d-9393-5e96-99ae-9e3a3319f191.
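cephadm wraps every containerized daemon in a templated systemd unit named ceph-<fsid>@<daemon>.service, which is what the "Starting Ceph mgr.compute-2.agiurv ..." lines refer to; the new mgr can be inspected with (sketch):

    systemctl status 'ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mgr.compute-2.agiurv.service'
    journalctl -u 'ceph-e990987d-9393-5e96-99ae-9e3a3319f191@mgr.compute-2.agiurv' -f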
Oct 09 10:59:17 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:17 compute-2 sudo[6108]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:17 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:18 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:19 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:20 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:20 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 10:59:21 compute-2 ceph-mon[6044]: Deploying daemon mgr.compute-2.agiurv on compute-2
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-0 calling monitor election
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-2 calling monitor election
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1982754876' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-1 calling monitor election
Oct 09 10:59:21 compute-2 ceph-mon[6044]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 09 10:59:21 compute-2 ceph-mon[6044]: monmap epoch 3
Oct 09 10:59:21 compute-2 ceph-mon[6044]: fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:21 compute-2 ceph-mon[6044]: last_changed 2025-10-09T10:59:16.540045+0000
Oct 09 10:59:21 compute-2 ceph-mon[6044]: created 2025-10-09T10:57:14.796633+0000
Oct 09 10:59:21 compute-2 ceph-mon[6044]: min_mon_release 19 (squid)
Oct 09 10:59:21 compute-2 ceph-mon[6044]: election_strategy: 1
Oct 09 10:59:21 compute-2 ceph-mon[6044]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 09 10:59:21 compute-2 ceph-mon[6044]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 09 10:59:21 compute-2 ceph-mon[6044]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 09 10:59:21 compute-2 ceph-mon[6044]: fsmap 
Oct 09 10:59:21 compute-2 ceph-mon[6044]: osdmap e12: 2 total, 2 up, 2 in
Oct 09 10:59:21 compute-2 ceph-mon[6044]: mgrmap e9: compute-0.izrudc(active, since 105s)
Oct 09 10:59:21 compute-2 ceph-mon[6044]: overall HEALTH_OK
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:21 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:22 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: pidfile_write: ignore empty --pid-file
Oct 09 10:59:22 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'alerts'
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'balancer'
Oct 09 10:59:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:22.195+0000 7fc1d546d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 10:59:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'cephadm'
Oct 09 10:59:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:22.274+0000 7fc1d546d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rtiqvm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:22 compute-2 ceph-mon[6044]: Deploying daemon mgr.compute-1.rtiqvm on compute-1
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2849695026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:22 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 10:59:22 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e13 e13: 2 total, 2 up, 2 in
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'crash'
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'dashboard'
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:23.077+0000 7fc1d546d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:23 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct 09 10:59:23 compute-2 sudo[6380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:23 compute-2 sudo[6380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:23 compute-2 sudo[6380]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'devicehealth'
Oct 09 10:59:23 compute-2 sudo[6405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:23 compute-2 sudo[6405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:23.722+0000 7fc1d546d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e14 e14: 2 total, 2 up, 2 in
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2849695026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 10:59:23 compute-2 ceph-mon[6044]: osdmap e13: 2 total, 2 up, 2 in
Oct 09 10:59:23 compute-2 ceph-mon[6044]: pgmap v58: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/140059955' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 09 10:59:23 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]:   from numpy import show_config as show_numpy_config
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:23.884+0000 7fc1d546d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'influx'
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 10:59:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'insights'
Oct 09 10:59:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:23.955+0000 7fc1d546d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.026674558 +0000 UTC m=+0.036257831 container create 7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_tesla, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'iostat'
Oct 09 10:59:24 compute-2 systemd[1]: Started libpod-conmon-7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5.scope.
Oct 09 10:59:24 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'k8sevents'
Oct 09 10:59:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:24.100+0000 7fc1d546d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.104220549 +0000 UTC m=+0.113803852 container init 7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_tesla, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.009608889 +0000 UTC m=+0.019192192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.110994439 +0000 UTC m=+0.120577722 container start 7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_tesla, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.11427401 +0000 UTC m=+0.123857313 container attach 7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 09 10:59:24 compute-2 systemd[1]: libpod-7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5.scope: Deactivated successfully.
Oct 09 10:59:24 compute-2 dreamy_tesla[6488]: 167 167
Oct 09 10:59:24 compute-2 conmon[6488]: conmon 7f04c4daf49946634085 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5.scope/container/memory.events
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.119079883 +0000 UTC m=+0.128663166 container died 7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct 09 10:59:24 compute-2 systemd[1]: var-lib-containers-storage-overlay-396d59a55f255feb81246bfb2c68ecbb090ed83ba9ce3df9f6dc33f63f052e50-merged.mount: Deactivated successfully.
Oct 09 10:59:24 compute-2 podman[6471]: 2025-10-09 10:59:24.150679185 +0000 UTC m=+0.160262468 container remove 7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:24 compute-2 systemd[1]: libpod-conmon-7f04c4daf49946634085f9cce6b8dab565267df0f664eed0a1c7280dc9a4bfe5.scope: Deactivated successfully.
Oct 09 10:59:24 compute-2 systemd[1]: Reloading.
Oct 09 10:59:24 compute-2 systemd-rc-local-generator[6533]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:24 compute-2 systemd-sysv-generator[6538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:24 compute-2 systemd[1]: Reloading.
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'localpool'
Oct 09 10:59:24 compute-2 systemd-rc-local-generator[6569]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:24 compute-2 systemd-sysv-generator[6573]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 10:59:24 compute-2 systemd[1]: Starting Ceph crash.compute-2 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 10:59:24 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e15 e15: 2 total, 2 up, 2 in
Oct 09 10:59:24 compute-2 ceph-mon[6044]: Deploying daemon crash.compute-2 on compute-2
Oct 09 10:59:24 compute-2 ceph-mon[6044]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 10:59:24 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/140059955' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 10:59:24 compute-2 ceph-mon[6044]: osdmap e14: 2 total, 2 up, 2 in
Oct 09 10:59:24 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3484103392' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:24 compute-2 ceph-mon[6044]: pgmap v60: 3 pgs: 2 active+clean, 1 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:24 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3484103392' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 10:59:24 compute-2 ceph-mon[6044]: osdmap e15: 2 total, 2 up, 2 in
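The POOL_APP_NOT_ENABLED warning raised at 10:59:24 is expected here: vms, volumes and backups were created without an application tag. Assuming these are the usual OpenStack RBD pools, the follow-up would be along these lines:

    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable backups rbd
    # 'rbd pool init <pool>' also works, initializing RBD metadata in the same step.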
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mirroring'
Oct 09 10:59:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'nfs'
Oct 09 10:59:24 compute-2 podman[6631]: 2025-10-09 10:59:24.892118538 +0000 UTC m=+0.033234338 container create 297b505cc29c8950d092bfbc42956b05a5e83d45c2c191a426c40abe0aeff804 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74dbd1b2a46801b15fbc9c78fded8ee7d15356d8f6ee38d036e5c62573edd441/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74dbd1b2a46801b15fbc9c78fded8ee7d15356d8f6ee38d036e5c62573edd441/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74dbd1b2a46801b15fbc9c78fded8ee7d15356d8f6ee38d036e5c62573edd441/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:24 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74dbd1b2a46801b15fbc9c78fded8ee7d15356d8f6ee38d036e5c62573edd441/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:24 compute-2 podman[6631]: 2025-10-09 10:59:24.942969774 +0000 UTC m=+0.084085604 container init 297b505cc29c8950d092bfbc42956b05a5e83d45c2c191a426c40abe0aeff804 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 09 10:59:24 compute-2 podman[6631]: 2025-10-09 10:59:24.948786271 +0000 UTC m=+0.089902071 container start 297b505cc29c8950d092bfbc42956b05a5e83d45c2c191a426c40abe0aeff804 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct 09 10:59:24 compute-2 bash[6631]: 297b505cc29c8950d092bfbc42956b05a5e83d45c2c191a426c40abe0aeff804
Oct 09 10:59:24 compute-2 podman[6631]: 2025-10-09 10:59:24.878292959 +0000 UTC m=+0.019408779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:24 compute-2 systemd[1]: Started Ceph crash.compute-2 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 10:59:24 compute-2 sudo[6405]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.095+0000 7fb8219a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.095+0000 7fb8219a7640 -1 AuthRegistry(0x7fb81c06a3c0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.096+0000 7fb8219a7640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.096+0000 7fb8219a7640 -1 AuthRegistry(0x7fb8219a5ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.097+0000 7fb81affd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.098+0000 7fb81a7fc640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.098+0000 7fb81b7fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: 2025-10-09T10:59:25.098+0000 7fb8219a7640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-crash-compute-2[6648]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
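The ceph-crash agent above starts before its dedicated key has been written into its daemon directory, so it probes the default /etc/ceph keyring paths, finds nothing, falls back to disabling cephx, and the monitor rejects the unauthenticated connection (allowed_methods [2] is cephx; [1] is none). During bootstrap this is transient: the agent retries on its 600 s scan cycle once cephadm deploys the key. A quick check, assuming the standard cephadm daemon layout under /var/lib/ceph/<fsid>/:

    # keyring cephadm deploys for the crash agent on this host
    ls -l /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/crash.compute-2/keyring
    # matching auth entity in the cluster
    ceph auth get client.crash.compute-2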
Oct 09 10:59:25 compute-2 sudo[6655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:25 compute-2 sudo[6655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:25 compute-2 sudo[6655]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:25.136+0000 7fc1d546d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Module nfs has missing NOTIFY_TYPES member
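Each of these module messages appears twice because the mgr writes to stdout, which journald attributes both to the container unit (ceph-...-mgr-compute-2-agiurv) and to the ceph-mgr identifier. "missing NOTIFY_TYPES member" is a warning that the module does not declare which cluster notifications it consumes; the module still loads. To confirm what actually loaded once the mgr is up:

    # enabled/available mgr modules (output shape varies by release)
    ceph mgr module ls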
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'orchestrator'
Oct 09 10:59:25 compute-2 sudo[6690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 09 10:59:25 compute-2 sudo[6690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
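This is cephadm materializing the OSD spec: it copies itself to /var/lib/ceph/<fsid>/cephadm.<digest>, exports CEPH_VOLUME_OSDSPEC_AFFINITY so the new OSD is tagged with the originating service spec (default_drive_group), and runs ceph-volume in a one-shot container. The same call can be previewed without creating anything by swapping in --report, mirroring the flags logged above:

    # dry-run of the batch call; prints the planned layout instead of executing it
    sudo /bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 \
        ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -- \
        lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --report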
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:25.371+0000 7fc1d546d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_support'
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:25.459+0000 7fc1d546d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.478359697 +0000 UTC m=+0.035019229 container create 2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 09 10:59:25 compute-2 systemd[1]: Started libpod-conmon-2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe.scope.
Oct 09 10:59:25 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:25.533+0000 7fc1d546d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.53829426 +0000 UTC m=+0.094953812 container init 2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.544323025 +0000 UTC m=+0.100982557 container start 2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_torvalds, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.548625061 +0000 UTC m=+0.105284613 container attach 2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 10:59:25 compute-2 jovial_torvalds[6773]: 167 167
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.549446549 +0000 UTC m=+0.106106081 container died 2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 10:59:25 compute-2 systemd[1]: libpod-2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe.scope: Deactivated successfully.
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.460858853 +0000 UTC m=+0.017518405 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:25 compute-2 systemd[1]: var-lib-containers-storage-overlay-e06190d23d15de5d85fc3fe409114a1390991b9fafc6f75828fd69e534b1e678-merged.mount: Deactivated successfully.
Oct 09 10:59:25 compute-2 podman[6756]: 2025-10-09 10:59:25.581140204 +0000 UTC m=+0.137799736 container remove 2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 09 10:59:25 compute-2 systemd[1]: libpod-conmon-2ef88b73eeaef256eb9ec7922406fad44d2c2ac42f535718aab26b9a3b1fbabe.scope: Deactivated successfully.
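The short-lived jovial_torvalds container exists only to print "167 167": cephadm probes the image for the uid/gid of the ceph user (167:167 in the official images) so host-side files can be chowned to match. Roughly reproducible by hand; the stat target is an assumption about what cephadm inspects, not taken from the log:

    # ask the image which uid/gid owns the ceph state directory
    podman run --rm quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        stat -c '%u %g' /var/lib/ceph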
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:25.616+0000 7fc1d546d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'progress'
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'prometheus'
Oct 09 10:59:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:25.694+0000 7fc1d546d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 10:59:25 compute-2 podman[6797]: 2025-10-09 10:59:25.775822559 +0000 UTC m=+0.095315795 container create 3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_chebyshev, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:25 compute-2 podman[6797]: 2025-10-09 10:59:25.710630267 +0000 UTC m=+0.030123513 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:25 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct 09 10:59:25 compute-2 systemd[1]: Started libpod-conmon-3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb.scope.
Oct 09 10:59:25 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721b8a1d8d19931974a1e67adfcfb5f21831e17a2662783d89625e00fd04fa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721b8a1d8d19931974a1e67adfcfb5f21831e17a2662783d89625e00fd04fa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721b8a1d8d19931974a1e67adfcfb5f21831e17a2662783d89625e00fd04fa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721b8a1d8d19931974a1e67adfcfb5f21831e17a2662783d89625e00fd04fa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:25 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1721b8a1d8d19931974a1e67adfcfb5f21831e17a2662783d89625e00fd04fa3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
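These kernel notices are informational: every bind mount of the host XFS filesystem into the container re-triggers the reminder that, without the bigtime feature, XFS inode timestamps cap at 2038 (0x7fffffff). One way to check the backing filesystem, assuming an xfsprogs recent enough to report the flag:

    # 1 means 64-bit timestamps are enabled, 0 means the 2038 cap applies
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'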
Oct 09 10:59:25 compute-2 podman[6797]: 2025-10-09 10:59:25.853636129 +0000 UTC m=+0.173129355 container init 3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_chebyshev, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 09 10:59:25 compute-2 podman[6797]: 2025-10-09 10:59:25.859725555 +0000 UTC m=+0.179218771 container start 3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_chebyshev, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 09 10:59:25 compute-2 podman[6797]: 2025-10-09 10:59:25.862548241 +0000 UTC m=+0.182041497 container attach 3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_chebyshev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:26 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e16 _set_new_cache_sizes cache_size:1019935462 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1257724780' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1257724780' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 10:59:26 compute-2 ceph-mon[6044]: osdmap e16: 2 total, 2 up, 2 in
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rbd_support'
Oct 09 10:59:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:26.060+0000 7fc1d546d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 10:59:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:26.159+0000 7fc1d546d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'restful'
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: --> passed data devices: 0 physical, 1 LVM
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 393e0a31-7936-4f03-9f0e-662e76b72949
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rgw'
Oct 09 10:59:26 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"} v 0)
Oct 09 10:59:26 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1480029380' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]: dispatch
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 10:59:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rook'
Oct 09 10:59:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:26.609+0000 7fc1d546d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 10:59:26 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e17 e17: 3 total, 2 up, 3 in
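The jump from "2 total, 2 up, 2 in" (e16) to "3 total, 2 up, 3 in" (e17) is the "osd new" call above registering osd.2 in the map: the id is allocated and marked in before the daemon has booted, so up lags until the OSD starts and peers. To watch it converge:

    # osd.2 should appear down/in here until its container is deployed
    ceph osd tree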
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 09 10:59:26 compute-2 lvm[6874]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:59:26 compute-2 lvm[6874]: VG ceph_vg0 finished
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:26 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
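The angry_chebyshev lines are ceph-volume's bluestore prepare sequence: generate keys, register the OSD id/fsid with the monitors, mount a tmpfs for the small metadata directory, point the block symlink at the LV, and fetch the monmap for mkfs. A quick sanity check from the OSD's container context (cephadm shell --name is used here as the entry point; the /var/lib/ceph/osd path is the in-container view):

    # the OSD dir lives on tmpfs, with 'block' resolving to the LVM device
    cephadm shell --name osd.2 -- ls -l /var/lib/ceph/osd/ceph-2/
    cephadm shell --name osd.2 -- readlink -f /var/lib/ceph/osd/ceph-2/block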
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3025487584' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/1480029380' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]: dispatch
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]: dispatch
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3025487584' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "393e0a31-7936-4f03-9f0e-662e76b72949"}]': finished
Oct 09 10:59:27 compute-2 ceph-mon[6044]: osdmap e17: 3 total, 2 up, 3 in
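The pool names cephfs.cephfs.meta and cephfs.cephfs.data follow the <fsname>.meta/<fsname>.data convention used when a filesystem called "cephfs" is created through the volumes interface, so a filesystem creation is evidently running in parallel with the OSD bring-up. To confirm:

    ceph fs ls
    ceph osd pool ls detail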
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:27 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:27 compute-2 ceph-mon[6044]: pgmap v64: 6 pgs: 2 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:27 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 09 10:59:27 compute-2 ceph-mon[6044]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/236533022' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 10:59:27 compute-2 angry_chebyshev[6813]:  stderr: got monmap epoch 3
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'selftest'
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.184+0000 7fc1d546d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 angry_chebyshev[6813]: --> Creating keyring file for osd.2
Oct 09 10:59:27 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 09 10:59:27 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 09 10:59:27 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 393e0a31-7936-4f03-9f0e-662e76b72949 --setuser ceph --setgroup ceph
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.259+0000 7fc1d546d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'snap_schedule'
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.339+0000 7fc1d546d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'stats'
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'status'
Oct 09 10:59:27 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Oct 09 10:59:27 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1791307082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.486+0000 7fc1d546d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telegraf'
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telemetry'
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.557+0000 7fc1d546d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e18 e18: 3 total, 2 up, 3 in
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.745+0000 7fc1d546d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'volumes'
Oct 09 10:59:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:27.979+0000 7fc1d546d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:28 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/236533022' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 10:59:28 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1791307082' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:28 compute-2 ceph-mon[6044]: from='client.? ' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 10:59:28 compute-2 ceph-mon[6044]: from='client.? ' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 10:59:28 compute-2 ceph-mon[6044]: osdmap e18: 3 total, 2 up, 3 in
Oct 09 10:59:28 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:28 compute-2 ceph-mgr[6348]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 10:59:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'zabbix'
Oct 09 10:59:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:28.264+0000 7fc1d546d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 10:59:28 compute-2 ceph-mgr[6348]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 10:59:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:28.343+0000 7fc1d546d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 10:59:28 compute-2 ceph-mgr[6348]: ms_deliver_dispatch: unhandled message 0x557b431b2d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 09 10:59:28 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e19 e19: 3 total, 2 up, 3 in
Oct 09 10:59:29 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv started
Oct 09 10:59:29 compute-2 ceph-mon[6044]: osdmap e19: 3 total, 2 up, 3 in
Oct 09 10:59:29 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:29 compute-2 ceph-mon[6044]: pgmap v67: 7 pgs: 3 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:29 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3743570852' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 09 10:59:29 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e20 e20: 3 total, 2 up, 3 in
Oct 09 10:59:30 compute-2 ceph-mon[6044]: mgrmap e10: compute-0.izrudc(active, since 112s), standbys: compute-2.agiurv
Oct 09 10:59:30 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct 09 10:59:30 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:30 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:30 compute-2 ceph-mon[6044]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 10:59:30 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3743570852' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
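POOL_APP_NOT_ENABLED fires because the pools created seconds earlier carry no application tag yet; the admin client is tagging them one by one (images, vms, volumes, backups -> rbd), so the count drops as each "application enable" finishes. The pattern, using the images pool from this log:

    ceph health detail
    # tagging a pool clears it from the warning
    ceph osd pool application enable images rbd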
Oct 09 10:59:30 compute-2 ceph-mon[6044]: osdmap e20: 3 total, 2 up, 3 in
Oct 09 10:59:30 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]:  stderr: 2025-10-09T10:59:27.298+0000 7fdb29b65740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]:  stderr: 2025-10-09T10:59:27.565+0000 7fdb29b65740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
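Both stderr lines are expected on the first mkfs of a blank LV: there is no bluestore label or fsid to read yet, so the probes fail at error level, after which mkfs writes the label and prepare reports success on the next line. Afterwards the label can be read back:

    # shows the osd fsid (393e0a31-...) stamped into the device
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0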
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 09 10:59:30 compute-2 angry_chebyshev[6813]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 09 10:59:30 compute-2 systemd[1]: libpod-3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb.scope: Deactivated successfully.
Oct 09 10:59:30 compute-2 systemd[1]: libpod-3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb.scope: Consumed 1.913s CPU time.
Oct 09 10:59:30 compute-2 podman[6797]: 2025-10-09 10:59:30.720284003 +0000 UTC m=+5.039777239 container died 3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Oct 09 10:59:30 compute-2 systemd[1]: var-lib-containers-storage-overlay-1721b8a1d8d19931974a1e67adfcfb5f21831e17a2662783d89625e00fd04fa3-merged.mount: Deactivated successfully.
Oct 09 10:59:30 compute-2 podman[6797]: 2025-10-09 10:59:30.77560649 +0000 UTC m=+5.095099706 container remove 3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1)
Oct 09 10:59:30 compute-2 systemd[1]: libpod-conmon-3bca87fb75599f9dadec33756adba06e74b2ad47e2899b67ae2014e8a84447eb.scope: Deactivated successfully.
Oct 09 10:59:30 compute-2 sudo[6690]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:30 compute-2 sudo[7808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:30 compute-2 sudo[7808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:30 compute-2 sudo[7808]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:30 compute-2 sudo[7833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -- lvm list --format json
Oct 09 10:59:30 compute-2 sudo[7833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:31 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e20 _set_new_cache_sizes cache_size:1020053159 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:31 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm started
Oct 09 10:59:31 compute-2 ceph-mon[6044]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:31 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2440602364' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 09 10:59:31 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e21 e21: 3 total, 2 up, 3 in
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.332053658 +0000 UTC m=+0.034946796 container create c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_merkle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:31 compute-2 systemd[1]: Started libpod-conmon-c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1.scope.
Oct 09 10:59:31 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.406613538 +0000 UTC m=+0.109506696 container init c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.412901971 +0000 UTC m=+0.115795109 container start c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.316720948 +0000 UTC m=+0.019614106 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.416219824 +0000 UTC m=+0.119112982 container attach c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 10:59:31 compute-2 eager_merkle[7911]: 167 167
Oct 09 10:59:31 compute-2 systemd[1]: libpod-c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1.scope: Deactivated successfully.
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.417463286 +0000 UTC m=+0.120356424 container died c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_merkle, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 09 10:59:31 compute-2 systemd[1]: var-lib-containers-storage-overlay-2a784b0ec80a4189419730582726abd09e80f596bbee7c2527c6ecb35ce4b314-merged.mount: Deactivated successfully.
Oct 09 10:59:31 compute-2 podman[7895]: 2025-10-09 10:59:31.455695713 +0000 UTC m=+0.158588851 container remove c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 09 10:59:31 compute-2 systemd[1]: libpod-conmon-c765c3913bd727f3d6a2ef2c034197dd27ec637b72968794f6793cc881a1afa1.scope: Deactivated successfully.
Oct 09 10:59:31 compute-2 podman[7935]: 2025-10-09 10:59:31.625194373 +0000 UTC m=+0.036893652 container create 319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_johnson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct 09 10:59:31 compute-2 systemd[1]: Started libpod-conmon-319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683.scope.
Oct 09 10:59:31 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdb078ada5d2385d789e6c8cc7bbe6d8f6db23311d96792f41adf4975e5e38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdb078ada5d2385d789e6c8cc7bbe6d8f6db23311d96792f41adf4975e5e38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdb078ada5d2385d789e6c8cc7bbe6d8f6db23311d96792f41adf4975e5e38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ffdb078ada5d2385d789e6c8cc7bbe6d8f6db23311d96792f41adf4975e5e38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:31 compute-2 podman[7935]: 2025-10-09 10:59:31.690909512 +0000 UTC m=+0.102608891 container init 319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_johnson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:31 compute-2 podman[7935]: 2025-10-09 10:59:31.698079985 +0000 UTC m=+0.109779284 container start 319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 09 10:59:31 compute-2 podman[7935]: 2025-10-09 10:59:31.702316449 +0000 UTC m=+0.114015738 container attach 319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 10:59:31 compute-2 podman[7935]: 2025-10-09 10:59:31.611493198 +0000 UTC m=+0.023192507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:31 compute-2 frosty_johnson[7951]: {
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:     "2": [
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:         {
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "devices": [
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "/dev/loop3"
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             ],
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "lv_name": "ceph_lv0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "lv_size": "21470642176",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qKZcdt-TcIi-O4u1-aXX2-8aFa-Z4sJ-nFPLz7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e990987d-9393-5e96-99ae-9e3a3319f191,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=393e0a31-7936-4f03-9f0e-662e76b72949,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "lv_uuid": "qKZcdt-TcIi-O4u1-aXX2-8aFa-Z4sJ-nFPLz7",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "name": "ceph_lv0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "tags": {
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.block_uuid": "qKZcdt-TcIi-O4u1-aXX2-8aFa-Z4sJ-nFPLz7",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.cephx_lockbox_secret": "",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.cluster_fsid": "e990987d-9393-5e96-99ae-9e3a3319f191",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.cluster_name": "ceph",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.crush_device_class": "",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.encrypted": "0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.osd_fsid": "393e0a31-7936-4f03-9f0e-662e76b72949",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.osd_id": "2",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.type": "block",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.vdo": "0",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:                 "ceph.with_tpm": "0"
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             },
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "type": "block",
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:             "vg_name": "ceph_vg0"
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:         }
Oct 09 10:59:31 compute-2 frosty_johnson[7951]:     ]
Oct 09 10:59:31 compute-2 frosty_johnson[7951]: }
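The frosty_johnson output is "ceph-volume lvm list --format json": a map of OSD id to its LVs, where the ceph.* lv_tags are the authoritative record activate uses to reassemble the OSD. Pulling fields out of it, assuming the cephadm binary is on PATH and jq is installed (neither appears in this log):

    sudo cephadm ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -- lvm list --format json \
        | jq -r '."2"[0] | .lv_path, .tags."ceph.osd_fsid"'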
Oct 09 10:59:31 compute-2 systemd[1]: libpod-319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683.scope: Deactivated successfully.
Oct 09 10:59:31 compute-2 podman[7935]: 2025-10-09 10:59:31.975362912 +0000 UTC m=+0.387062221 container died 319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 09 10:59:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-1ffdb078ada5d2385d789e6c8cc7bbe6d8f6db23311d96792f41adf4975e5e38-merged.mount: Deactivated successfully.
Oct 09 10:59:32 compute-2 podman[7935]: 2025-10-09 10:59:32.017637856 +0000 UTC m=+0.429337145 container remove 319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 09 10:59:32 compute-2 systemd[1]: libpod-conmon-319d756fc1a9cf85f1f92309f8e677ea9d8b5d3b991ed14c0eb2f18addde0683.scope: Deactivated successfully.
Oct 09 10:59:32 compute-2 sudo[7833]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:32 compute-2 sudo[7973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:32 compute-2 sudo[7973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:32 compute-2 sudo[7973]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:32 compute-2 sudo[7998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:32 compute-2 sudo[7998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
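With the OSD prepared and activated, the mgr now has cephadm run "_orch deploy" to lay down the systemd-managed container unit for osd.2; this is also why ceph-volume activate ran with --no-systemd earlier: cephadm, not ceph-volume, owns the unit. To verify once deployment completes (the unit name follows the ceph-<fsid>@<daemon> pattern visible above for crash.compute-2):

    ceph orch ps compute-2
    systemctl status ceph-e990987d-9393-5e96-99ae-9e3a3319f191@osd.2.service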
Oct 09 10:59:32 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2440602364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 09 10:59:32 compute-2 ceph-mon[6044]: osdmap e21: 3 total, 2 up, 3 in
Oct 09 10:59:32 compute-2 ceph-mon[6044]: mgrmap e11: compute-0.izrudc(active, since 115s), standbys: compute-2.agiurv, compute-1.rtiqvm
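mgrmap e11 now lists all three nodes: compute-0.izrudc active, with compute-2.agiurv and compute-1.rtiqvm as standbys, so a mgr failover has somewhere to go. A one-line check:

    ceph mgr stat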
Oct 09 10:59:32 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:32 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct 09 10:59:32 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2976644364' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 09 10:59:32 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 09 10:59:32 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:32 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e22 e22: 3 total, 2 up, 3 in
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.583224464 +0000 UTC m=+0.050232035 container create fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:32 compute-2 systemd[1]: Started libpod-conmon-fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65.scope.
Oct 09 10:59:32 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.566492737 +0000 UTC m=+0.033500328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.672913467 +0000 UTC m=+0.139921058 container init fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shockley, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.678952652 +0000 UTC m=+0.145960243 container start fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shockley, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:32 compute-2 trusting_shockley[8079]: 167 167
Oct 09 10:59:32 compute-2 systemd[1]: libpod-fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65.scope: Deactivated successfully.
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.687061107 +0000 UTC m=+0.154068718 container attach fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shockley, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.687878545 +0000 UTC m=+0.154886126 container died fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shockley, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 10:59:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-3409268aa26e9d89931e543ba47e67ec70099093a580b4823e4e8c31ee3b484c-merged.mount: Deactivated successfully.
Oct 09 10:59:32 compute-2 podman[8063]: 2025-10-09 10:59:32.745481079 +0000 UTC m=+0.212488650 container remove fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:59:32 compute-2 systemd[1]: libpod-conmon-fc4f9c2fc209a04935e063ea3651957ad684564a117dfbe02e7f0922200d7c65.scope: Deactivated successfully.
Oct 09 10:59:33 compute-2 podman[8110]: 2025-10-09 10:59:33.001330789 +0000 UTC m=+0.045197514 container create fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:33 compute-2 systemd[1]: Started libpod-conmon-fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f.scope.
Oct 09 10:59:33 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea8eeafe5c9945be133d72e37b11b93c5386afa98fdd9dcffe37290a417a86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea8eeafe5c9945be133d72e37b11b93c5386afa98fdd9dcffe37290a417a86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea8eeafe5c9945be133d72e37b11b93c5386afa98fdd9dcffe37290a417a86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea8eeafe5c9945be133d72e37b11b93c5386afa98fdd9dcffe37290a417a86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:33 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7ea8eeafe5c9945be133d72e37b11b93c5386afa98fdd9dcffe37290a417a86/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
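[Editor's note] The repeated xfs notices are informational: these filesystems use 32-bit inode timestamps (typically meaning they were created without the XFS bigtime feature), so the last representable time is the epoch limit 0x7fffffff that the kernel prints. The arithmetic behind the "until 2038" wording:

    from datetime import datetime, timezone

    limit = 0x7FFFFFFF               # as printed by the kernel
    print(limit)                     # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- hence "supports timestamps until 2038"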
Oct 09 10:59:33 compute-2 podman[8110]: 2025-10-09 10:59:33.072592307 +0000 UTC m=+0.116459042 container init fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:33 compute-2 podman[8110]: 2025-10-09 10:59:32.982103797 +0000 UTC m=+0.025970542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:33 compute-2 podman[8110]: 2025-10-09 10:59:33.080550017 +0000 UTC m=+0.124416742 container start fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 09 10:59:33 compute-2 podman[8110]: 2025-10-09 10:59:33.083646552 +0000 UTC m=+0.127513297 container attach fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:33 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test[8126]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 09 10:59:33 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test[8126]:                             [--no-systemd] [--no-tmpfs]
Oct 09 10:59:33 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test[8126]: ceph-volume activate: error: unrecognized arguments: --bad-option
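[Editor's note] The `--bad-option` failure above is intentional, as the container name `osd-2-activate-test` suggests: cephadm probes whether the image's ceph-volume supports the top-level `activate` subcommand by passing a bogus flag and checking that the parser rejects it with a usage message, rather than failing because `activate` itself is unknown. A hedged sketch of that probing style:

    import subprocess

    def supports_subcommand(base_cmd, subcommand):
        """Probe for a subcommand by feeding it an argument its parser must
        reject. An 'unrecognized arguments' reply means the subcommand's
        parser exists; an 'invalid choice' style error means it does not."""
        proc = subprocess.run(
            base_cmd + [subcommand, "--bad-option"],
            capture_output=True, text=True,
        )
        return "unrecognized arguments: --bad-option" in proc.stderr

    # Assumption: ceph-volume is on PATH here; in the log it runs inside
    # the quay.io/ceph/ceph container instead.
    print(supports_subcommand(["ceph-volume"], "activate"))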
Oct 09 10:59:33 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e23 e23: 3 total, 2 up, 3 in
Oct 09 10:59:33 compute-2 systemd[1]: libpod-fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f.scope: Deactivated successfully.
Oct 09 10:59:33 compute-2 ceph-mon[6044]: Deploying daemon osd.2 on compute-2
Oct 09 10:59:33 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2976644364' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 09 10:59:33 compute-2 ceph-mon[6044]: osdmap e22: 3 total, 2 up, 3 in
Oct 09 10:59:33 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:33 compute-2 ceph-mon[6044]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:33 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2365709173' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 09 10:59:33 compute-2 podman[8131]: 2025-10-09 10:59:33.298931396 +0000 UTC m=+0.024496212 container died fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 09 10:59:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-a7ea8eeafe5c9945be133d72e37b11b93c5386afa98fdd9dcffe37290a417a86-merged.mount: Deactivated successfully.
Oct 09 10:59:33 compute-2 podman[8131]: 2025-10-09 10:59:33.347067399 +0000 UTC m=+0.072632235 container remove fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 10:59:33 compute-2 systemd[1]: libpod-conmon-fdfc674b0115babafa58166ce866405d1f3ede5bd56dc00a89bb49cf03d08c0f.scope: Deactivated successfully.
Oct 09 10:59:33 compute-2 systemd[1]: Reloading.
Oct 09 10:59:33 compute-2 systemd-sysv-generator[8194]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:33 compute-2 systemd-rc-local-generator[8191]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:33 compute-2 systemd[1]: Reloading.
Oct 09 10:59:33 compute-2 systemd-sysv-generator[8236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:33 compute-2 systemd-rc-local-generator[8232]: /etc/rc.d/rc.local is not marked executable, skipping.
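[Editor's note] The back-to-back daemon reloads bracket cephadm writing and enabling the new osd.2 unit (the "Starting Ceph osd.2" line that follows confirms it); the generator warnings are routine on this host and merely re-fire on every reload. The rc.local one fires because the file is not executable, which is easy to verify (and, if rc.local is actually wanted, to fix) directly:

    import os
    import stat

    path = "/etc/rc.d/rc.local"
    mode = os.stat(path).st_mode
    print("executable:", bool(mode & stat.S_IXUSR))
    # To have systemd-rc-local-generator pick it up on the next reload:
    # os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)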
Oct 09 10:59:34 compute-2 systemd[1]: Starting Ceph osd.2 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 10:59:34 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2365709173' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 09 10:59:34 compute-2 ceph-mon[6044]: osdmap e23: 3 total, 2 up, 3 in
Oct 09 10:59:34 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:34 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/700720401' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 09 10:59:34 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct 09 10:59:34 compute-2 podman[8288]: 2025-10-09 10:59:34.366320248 +0000 UTC m=+0.044824942 container create 62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 09 10:59:34 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b116af55e72ee87c90c578a943988bf6b36ad735e19aa1b45bf7a8fb58a802f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b116af55e72ee87c90c578a943988bf6b36ad735e19aa1b45bf7a8fb58a802f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b116af55e72ee87c90c578a943988bf6b36ad735e19aa1b45bf7a8fb58a802f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b116af55e72ee87c90c578a943988bf6b36ad735e19aa1b45bf7a8fb58a802f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b116af55e72ee87c90c578a943988bf6b36ad735e19aa1b45bf7a8fb58a802f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:34 compute-2 podman[8288]: 2025-10-09 10:59:34.433993684 +0000 UTC m=+0.112498428 container init 62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 09 10:59:34 compute-2 podman[8288]: 2025-10-09 10:59:34.44331451 +0000 UTC m=+0.121819194 container start 62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:34 compute-2 podman[8288]: 2025-10-09 10:59:34.350501101 +0000 UTC m=+0.029005795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:34 compute-2 podman[8288]: 2025-10-09 10:59:34.447044647 +0000 UTC m=+0.125549361 container attach 62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 09 10:59:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:34 compute-2 bash[8288]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:34 compute-2 bash[8288]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:35 compute-2 lvm[8384]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:59:35 compute-2 lvm[8384]: VG ceph_vg0 finished
Oct 09 10:59:35 compute-2 lvm[8388]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:59:35 compute-2 lvm[8388]: VG ceph_vg0 finished
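[Editor's note] The "PV online, VG complete" pairs come from lvm's udev-driven event activation: each PV uevent reruns pvscan (hence the two pids, 8384 and 8388), and once every PV of ceph_vg0 is online the VG is declared complete and autoactivated. The same VG/LV state can be listed by hand; a sketch assuming the standard lvm2 CLI:

    import json
    import subprocess

    # `lvs --reportformat json` is standard lvm2; the VG name is from the log.
    out = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "vg_name,lv_name,lv_size,devices"],
        capture_output=True, text=True, check=True,
    ).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        if lv["vg_name"] == "ceph_vg0":
            print(lv)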
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:35 compute-2 bash[8288]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 10:59:35 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/700720401' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 09 10:59:35 compute-2 ceph-mon[6044]: osdmap e24: 3 total, 2 up, 3 in
Oct 09 10:59:35 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:35 compute-2 ceph-mon[6044]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:35 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 10:59:35 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2692541067' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 09 10:59:35 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 10:59:35 compute-2 bash[8288]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 10:59:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate[8303]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 09 10:59:35 compute-2 bash[8288]: --> ceph-volume lvm activate successful for osd ID: 2
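[Editor's note] The successful activation above is the literal sequence `ceph-volume lvm activate` runs for a bluestore OSD after the raw-mode attempt fails: prime the OSD directory from the LV, symlink `block` to the device, and hand ownership to ceph:ceph. Re-played outside the container it would look like the sketch below (commands verbatim from the "Running command:" lines; run as root, for illustration only, and not a substitute for `cephadm ceph-volume -- lvm activate`):

    import subprocess

    steps = [
        ["/usr/bin/chown", "-R", "ceph:ceph", "/var/lib/ceph/osd/ceph-2"],
        ["/usr/bin/ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
         "--dev", "/dev/ceph_vg0/ceph_lv0",
         "--path", "/var/lib/ceph/osd/ceph-2", "--no-mon-config"],
        ["/usr/bin/ln", "-snf", "/dev/ceph_vg0/ceph_lv0",
         "/var/lib/ceph/osd/ceph-2/block"],
        ["/usr/bin/chown", "-h", "ceph:ceph", "/var/lib/ceph/osd/ceph-2/block"],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-0"],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/var/lib/ceph/osd/ceph-2"],
    ]
    for step in steps:
        subprocess.run(step, check=True)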
Oct 09 10:59:35 compute-2 systemd[1]: libpod-62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a.scope: Deactivated successfully.
Oct 09 10:59:35 compute-2 podman[8288]: 2025-10-09 10:59:35.69635772 +0000 UTC m=+1.374862424 container died 62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Oct 09 10:59:35 compute-2 systemd[1]: libpod-62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a.scope: Consumed 1.463s CPU time.
Oct 09 10:59:35 compute-2 systemd[1]: var-lib-containers-storage-overlay-4b116af55e72ee87c90c578a943988bf6b36ad735e19aa1b45bf7a8fb58a802f-merged.mount: Deactivated successfully.
Oct 09 10:59:35 compute-2 podman[8288]: 2025-10-09 10:59:35.746845192 +0000 UTC m=+1.425349886 container remove 62e324658bb80582b52f28b8e84252670edd843de32bc4a2df28f7ea6577da8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:35 compute-2 podman[8555]: 2025-10-09 10:59:35.98081816 +0000 UTC m=+0.047803633 container create 2aae956e62a48e04af25a9c0a7bff2788296a9e4c6ff0e6f96ce5fe020e4a8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 09 10:59:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e25 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccfb6df48a79acc2b3e8d526318c45272259c628359c239e01bb18337e5875a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccfb6df48a79acc2b3e8d526318c45272259c628359c239e01bb18337e5875a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccfb6df48a79acc2b3e8d526318c45272259c628359c239e01bb18337e5875a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccfb6df48a79acc2b3e8d526318c45272259c628359c239e01bb18337e5875a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:36 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccfb6df48a79acc2b3e8d526318c45272259c628359c239e01bb18337e5875a/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:36 compute-2 podman[8555]: 2025-10-09 10:59:36.035131093 +0000 UTC m=+0.102116636 container init 2aae956e62a48e04af25a9c0a7bff2788296a9e4c6ff0e6f96ce5fe020e4a8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Oct 09 10:59:36 compute-2 podman[8555]: 2025-10-09 10:59:36.047056967 +0000 UTC m=+0.114042470 container start 2aae956e62a48e04af25a9c0a7bff2788296a9e4c6ff0e6f96ce5fe020e4a8fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 09 10:59:36 compute-2 bash[8555]: 2aae956e62a48e04af25a9c0a7bff2788296a9e4c6ff0e6f96ce5fe020e4a8fd
Oct 09 10:59:36 compute-2 podman[8555]: 2025-10-09 10:59:35.957338024 +0000 UTC m=+0.024323577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:36 compute-2 systemd[1]: Started Ceph osd.2 for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 10:59:36 compute-2 ceph-osd[8575]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 10:59:36 compute-2 ceph-osd[8575]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 09 10:59:36 compute-2 ceph-osd[8575]: pidfile_write: ignore empty --pid-file
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) close
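[Editor's note] The bdev size line is self-consistent: 21470642176 bytes is exactly 0x4ffc00000 and 4 MiB short of 20 GiB, consistent with a 4 MiB metadata/alignment reservation when the LV was carved from the loop-backed PV (an inference about the layout, not stated in the log). Checking the arithmetic:

    size = 21470642176
    print(hex(size))                 # 0x4ffc00000, as logged
    print(size / 2**30)              # 19.996... GiB, printed as "20 GiB"
    print(20 * 2**30 - size)         # 4194304 -> exactly 4 MiB under 20 GiB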
Oct 09 10:59:36 compute-2 sudo[7998]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:36 compute-2 sudo[8587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:36 compute-2 sudo[8587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:36 compute-2 sudo[8587]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:36 compute-2 sudo[8612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -- raw list --format json
Oct 09 10:59:36 compute-2 sudo[8612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:36 compute-2 ceph-mon[6044]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 10:59:36 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 09 10:59:36 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2692541067' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 09 10:59:36 compute-2 ceph-mon[6044]: osdmap e25: 3 total, 2 up, 3 in
Oct 09 10:59:36 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:36 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 10:59:36 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 

Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaef800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaef800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaef800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaef800 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653daaefc00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:36 compute-2 podman[8686]: 2025-10-09 10:59:36.650861732 +0000 UTC m=+0.018983505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:36 compute-2 podman[8686]: 2025-10-09 10:59:36.79022017 +0000 UTC m=+0.158341923 container create 1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 09 10:59:36 compute-2 ceph-osd[8575]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 09 10:59:36 compute-2 ceph-osd[8575]: load: jerasure load: lrc 
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 10:59:36 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:37 compute-2 systemd[1]: Started libpod-conmon-1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f.scope.
Oct 09 10:59:37 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:37 compute-2 podman[8686]: 2025-10-09 10:59:37.418824166 +0000 UTC m=+0.786945939 container init 1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sinoussi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 09 10:59:37 compute-2 podman[8686]: 2025-10-09 10:59:37.430900215 +0000 UTC m=+0.799021998 container start 1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sinoussi, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:37 compute-2 systemd[1]: libpod-1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f.scope: Deactivated successfully.
Oct 09 10:59:37 compute-2 amazing_sinoussi[8709]: 167 167
Oct 09 10:59:37 compute-2 conmon[8709]: conmon 1953e3030b498d19eba4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f.scope/container/memory.events
Oct 09 10:59:37 compute-2 ceph-osd[8575]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 09 10:59:37 compute-2 ceph-osd[8575]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
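[Editor's note] The mClock figures decompose into the scheduler's HDD defaults: 157286400 bytes/s is 150 MiB/s per shard, and dividing it by the per-IO cost recovers roughly 315 IOPS, matching the stock osd_mclock_max_capacity_iops_hdd default. This is an inference from the defaults rather than something the log states:

    per_shard = 157_286_400          # bytes/second, from the log
    cost_per_io = 499_321.90         # bytes/io, from the log
    print(per_shard / 2**20)         # 150.0 MiB/s
    print(per_shard / cost_per_io)   # ~315.0 IOPS (HDD capacity default)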
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:37 compute-2 podman[8686]: 2025-10-09 10:59:37.60053078 +0000 UTC m=+0.968652563 container attach 1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sinoussi, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:37 compute-2 podman[8686]: 2025-10-09 10:59:37.601598696 +0000 UTC m=+0.969720479 container died 1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 09 10:59:37 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct 09 10:59:37 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 09 10:59:37 compute-2 ceph-mon[6044]: osdmap e26: 3 total, 2 up, 3 in
Oct 09 10:59:37 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:37 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 10:59:37 compute-2 ceph-mon[6044]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:37 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:37 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:37 compute-2 systemd[1]: var-lib-containers-storage-overlay-77988d2fd71efe3ae918cd675fcc437452602d9c43f58af85dff1fcf61e1b204-merged.mount: Deactivated successfully.
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:37 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:37 compute-2 podman[8686]: 2025-10-09 10:59:37.924305205 +0000 UTC m=+1.292426968 container remove 1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 10:59:38 compute-2 systemd[1]: libpod-conmon-1953e3030b498d19eba435fa8c5d78c79210f8f8c6e398b3814c172fa27f586f.scope: Deactivated successfully.
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db964c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount shared_bdev_used = 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: RocksDB version: 7.9.2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Git sha 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DB SUMMARY
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DB Session ID:  0H2Y6P5A6IPK84AU71TP
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: CURRENT file:  CURRENT
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
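The DB SUMMARY block is the quickest place to read off manifest/WAL state when triaging an OSD start. A small stdlib sketch, with regexes written against the exact lines above (helper names are ours):

    import re

    PATTERNS = {
        "manifest": re.compile(r'MANIFEST file:\s+(\S+) size: (\d+) Bytes'),
        "wal":      re.compile(r'db\.wal: (\S+) size: (\d+)'),
        "sst":      re.compile(r'SST files in db dir, Total Num: (\d+), files: (.*)'),
    }

    def scan_db_summary(lines):
        """Return whichever DB SUMMARY fields appear in the given journal lines."""
        found = {}
        for line in lines:
            for key, rx in PATTERNS.items():
                m = rx.search(line)
                if m:
                    found[key] = m.groups()
        return found

    # Against the lines above: manifest=('MANIFEST-000032', '1007'),
    # wal=('000031.log', '5097'), sst=('1', '000030.sst ')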
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.error_if_exists: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.create_if_missing: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                     Options.env: 0x5653dab436c0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                Options.info_log: 0x5653db9696c0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                              Options.statistics: (nil)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.use_fsync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                              Options.db_log_dir: 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                 Options.wal_dir: db.wal
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.write_buffer_manager: 0x5653dba5aa00
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.unordered_write: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.row_cache: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                              Options.wal_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.two_write_queues: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.wal_compression: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.atomic_flush: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_background_jobs: 4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_background_compactions: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_subcompactions: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.max_open_files: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.max_background_flushes: -1
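Every "Options.<name>: <value>" line in the dump above is machine-parseable, which is convenient for diffing the effective options of two OSDs. A minimal parser (ours, stdlib only; it tolerates the stray space in "delayed_write_rate :" and the dotted sub-option keys that appear in the column-family dumps below):

    import re

    OPT = re.compile(r'rocksdb:\s+Options\.([\w.\[\]]+?)\s*:\s*(.*)$')

    def parse_rocksdb_options(journal_lines):
        """Collect Options.* keys from a rocksdb startup dump into a dict."""
        opts = {}
        for line in journal_lines:
            m = OPT.search(line)
            if m:
                opts[m.group(1)] = m.group(2).strip()
        return opts

    # e.g. opts["max_background_jobs"] == "4",
    #      opts["max_total_wal_size"] == "1073741824"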
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Compression algorithms supported:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kZSTD supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kXpressCompression supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kBZip2Compression supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kLZ4Compression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kZlibCompression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kSnappyCompression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DMutex implementation: pthread_mutex_t
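This build reports ZSTD, BZip2, and Xpress unavailable, with LZ4/LZ4HC/Zlib/Snappy present, which is consistent with the "Options.compression: LZ4" choice printed for every column family below. Expressed as a check (support table transcribed from the lines above; the preference order is illustrative, not RocksDB's):

    supported = {"ZSTD": 0, "Xpress": 0, "BZip2": 0, "ZSTDNotFinal": 0,
                 "LZ4": 1, "Zlib": 1, "LZ4HC": 1, "Snappy": 1}
    preference = ["ZSTD", "LZ4", "Snappy", "Zlib"]   # our ordering, for illustration
    print(next(c for c in preference if supported[c]))   # -> LZ4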
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
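The BinnedLRUCache capacity in the table options ties back to the bluestore cache split: 483183820 bytes is the kv share (0.45 of the 1 GiB cache), and the same cache object (block_cache 0x5653dab85350) is shared by every column family dumped below. Quick check:

    assert int(1073741824 * 0.45) == 483183820     # capacity printed above
    shards = 2 ** 4                                # num_shard_bits : 4
    print(483183820 // shards, "bytes per shard")  # 30198988, ~28.8 MiB each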
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
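For the [default] column family, the memtable settings above bound write-path memory: up to max_write_buffer_number (64) memtables of write_buffer_size (16 MiB) may exist at once, a flush merges min_write_buffer_number_to_merge (6) of them, and max_total_wal_size (1 GiB) caps the WAL globally. As arithmetic (reading the dump, not prescribing behaviour):

    write_buffer_size = 16777216                # 16 MiB
    max_buffers       = 64
    merge_threshold   = 6
    print(write_buffer_size * max_buffers)      # 1073741824: worst-case memtable RAM
    print(write_buffer_size * merge_threshold)  # 100663296: ~96 MiB merged per flush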
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969a80)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969aa0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab849b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969aa0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab849b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db969aa0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab849b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: aa2cf8f2-a975-4fbe-9ff2-c6e08dcc0343
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578121787, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578121934, "job": 1, "event": "recovery_finished"}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: freelist init
Oct 09 10:59:38 compute-2 ceph-osd[8575]: freelist _read_cfg
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs umount
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.076790288 +0000 UTC m=+0.024513533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.230521763 +0000 UTC m=+0.178245008 container create b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kilby, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:38 compute-2 systemd[1]: Started libpod-conmon-b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949.scope.
Oct 09 10:59:38 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d06a3b98e3d258d9c5d538b4fa66abf2c76cd854c65c666aa421d03110e7e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d06a3b98e3d258d9c5d538b4fa66abf2c76cd854c65c666aa421d03110e7e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d06a3b98e3d258d9c5d538b4fa66abf2c76cd854c65c666aa421d03110e7e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73d06a3b98e3d258d9c5d538b4fa66abf2c76cd854c65c666aa421d03110e7e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.305033081 +0000 UTC m=+0.252756346 container init b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kilby, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.311057425 +0000 UTC m=+0.258780670 container start b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kilby, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.31413246 +0000 UTC m=+0.261855705 container attach b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kilby, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bdev(0x5653db965000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluefs mount shared_bdev_used = 4718592
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: RocksDB version: 7.9.2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Git sha 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DB SUMMARY
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DB Session ID:  0H2Y6P5A6IPK84AU71TO
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: CURRENT file:  CURRENT
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.error_if_exists: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.create_if_missing: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                     Options.env: 0x5653dab431f0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                Options.info_log: 0x5653db969860
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                              Options.statistics: (nil)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.use_fsync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                              Options.db_log_dir: 
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                                 Options.wal_dir: db.wal
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.write_buffer_manager: 0x5653dba5aa00
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.unordered_write: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.row_cache: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                              Options.wal_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.two_write_queues: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.wal_compression: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.atomic_flush: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_background_jobs: 4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_background_compactions: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_subcompactions: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.max_open_files: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Compression algorithms supported:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kZSTD supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kXpressCompression supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kBZip2Compression supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kLZ4Compression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kZlibCompression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         kSnappyCompression supported: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db9695a0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
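
The compaction geometry dumped above is easy to sanity-check: with max_bytes_for_level_base = 1073741824 (1 GiB), max_bytes_for_level_multiplier = 8, every addtl factor at 1, and level_compaction_dynamic_level_bytes = 0, the static per-level targets simply grow by 8x per level. A minimal sketch (illustrative arithmetic, not part of the log) reproducing those targets:

    # Static per-level size targets implied by the options above; the plain
    # formula base * multiplier**(level - 1) applies because dynamic level
    # bytes is off and all max_bytes_for_level_multiplier_addtl[] are 1.
    base = 1073741824        # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8.0         # Options.max_bytes_for_level_multiplier
    num_levels = 7           # Options.num_levels
    for level in range(1, num_levels):
        print(f"L{level}: {base * multiplier ** (level - 1) / 2**30:.0f} GiB")
    # -> L1: 1, L2: 8, L3: 64, L4: 512, L5: 4096, L6: 32768 (GiB)
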
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db9695a0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
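
Note that every column family in this dump prints the same block_cache pointer (0x5653dab85350): the OSD shares one BinnedLRUCache across all of them rather than sizing a cache per family. The capacity also works out to 0.45 x 1 GiB, which looks like a 45% kv share of a 1 GiB cache budget (an inference from the number, not something the log states). Illustrative arithmetic, not log output:

    # Shared block cache as dumped above: one BinnedLRUCache serves all
    # column families in this process.
    capacity = 483183820              # block_cache_options: capacity (bytes)
    num_shard_bits = 4                # block_cache_options: num_shard_bits
    shards = 2 ** num_shard_bits      # 16 internal cache shards
    print(f"{capacity / 2**20:.1f} MiB total")           # ~460.8 MiB
    print(f"{capacity / shards / 2**20:.1f} MiB/shard")  # ~28.8 MiB x 16 shards
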
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
    [... options dump for this column family is line-for-line identical to the [m-0] dump above ...]
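
The [m-1] dump (and the [m-2], [p-0], and [p-1] dumps that follow) repeats the [m-0] options verbatim; the m-* and p-* names appear to be BlueStore's key-prefix shards of the omap and pgmeta namespaces (see bluestore_rocksdb_cfs), all inheriting the same tuning here. A small stdlib sketch (hypothetical helper, not part of Ceph) that verifies the claim by diffing the Options.* lines per column family from a journal capture fed on stdin:

    import re
    import sys
    from collections import defaultdict

    # Collect "Options.foo: bar" pairs under the most recent
    # "Options for column family [...]" header.
    cf_re = re.compile(r"Options for column family \[([^\]]+)\]")
    kv_re = re.compile(r"rocksdb:\s+(Options\.\S+?)\s*:\s*(.+?)\s*$")

    dumps = defaultdict(dict)
    current = None
    for line in sys.stdin:
        m = cf_re.search(line)
        if m:
            current = m.group(1)
            continue
        m = kv_re.search(line)
        if current and m:
            dumps[current][m.group(1)] = m.group(2)

    base = dumps.get("m-0", {})
    for cf, opts in sorted(dumps.items()):
        diff = {k: v for k, v in opts.items() if base.get(k) != v}
        print(cf, "matches [m-0]" if not diff else diff)

For this capture, every family should print "matches [m-0]".
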
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
    [... options dump for this column family is line-for-line identical to the [m-0] dump above ...]
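
The memtable settings repeated in each dump bound the write path: 16 MiB write buffers, up to 64 of them per column family, merged six at a time before flushing to L0. Rough budget (illustrative; in practice usage normally stays far below the cap):

    # Memtable budget per column family from the dumped options.
    write_buffer_size = 16777216             # 16 MiB per memtable
    max_write_buffer_number = 64             # ceiling per column family
    min_write_buffer_number_to_merge = 6     # memtables merged per flush
    print(write_buffer_size * max_write_buffer_number / 2**30)           # 1.0 GiB hard cap
    print(write_buffer_size * min_write_buffer_number_to_merge / 2**20)  # 96.0 MiB per flush
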
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
    [... options dump for this column family is line-for-line identical to the [m-0] dump above ...]
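
The L0 back-pressure thresholds are likewise identical everywhere: L0-to-L1 compaction starts at 8 files, writes are throttled at 20 and stopped at 36, with the soft/hard pending-compaction byte limits (64 GiB / 256 GiB) as the byte-based backstop. Assuming flushed L0 files of roughly 6 x 16 MiB = 96 MiB, as the memtable options imply, that corresponds to (rough, illustrative figures):

    # Approximate L0 occupancy at each write-stall threshold, assuming
    # ~96 MiB flushed files; real L0 file sizes vary with load.
    l0_file = 6 * 16 * 2**20
    for name, files in [("compaction trigger", 8),
                        ("slowdown writes", 20),
                        ("stop writes", 36)]:
        print(f"{name}: {files} files ~ {files * l0_file / 2**30:.2f} GiB")
    # -> 0.75 GiB, 1.88 GiB, 3.38 GiB
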
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
    [... options dump, as captured here, matches the [m-0] dump above line-for-line ...]
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db9695a0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab85350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 483183820
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db9699e0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab849b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db9699e0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab849b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:           Options.merge_operator: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5653db9699e0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5653dab849b0
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.compression: LZ4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.num_levels: 7
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.bloom_locality: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                               Options.ttl: 2592000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                       Options.enable_blob_files: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                           Options.min_blob_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: aa2cf8f2-a975-4fbe-9ff2-c6e08dcc0343
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578399925, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578403270, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007578, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aa2cf8f2-a975-4fbe-9ff2-c6e08dcc0343", "db_session_id": "0H2Y6P5A6IPK84AU71TO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578410080, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007578, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aa2cf8f2-a975-4fbe-9ff2-c6e08dcc0343", "db_session_id": "0H2Y6P5A6IPK84AU71TO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578412914, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007578, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "aa2cf8f2-a975-4fbe-9ff2-c6e08dcc0343", "db_session_id": "0H2Y6P5A6IPK84AU71TO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007578414256, "job": 1, "event": "recovery_finished"}
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5653dbb3e000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: DB pointer 0x5653dbcb2000
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 09 10:59:38 compute-2 ceph-osd[8575]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:59:38 compute-2 ceph-osd[8575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
                                          
                                          ** Compaction Stats [m-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-0] **
                                          
                                          ** Compaction Stats [m-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-1] **
                                          
                                          ** Compaction Stats [m-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [m-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [m-2] **
                                          
                                          ** Compaction Stats [p-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-0] **
                                          
                                          ** Compaction Stats [p-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-1] **
                                          
                                          ** Compaction Stats [p-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [p-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [p-2] **
                                          
                                          ** Compaction Stats [O-0] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-0] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab849b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-0] **
                                          
                                          ** Compaction Stats [O-1] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-1] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab849b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-1] **
                                          
                                          ** Compaction Stats [O-2] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [O-2] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab849b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [O-2] **
                                          
                                          ** Compaction Stats [L] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [L] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [L] **
                                          
                                          ** Compaction Stats [P] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [P] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.1 total, 0.1 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5653dab85350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.8e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [P] **
Oct 09 10:59:38 compute-2 ceph-osd[8575]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 09 10:59:38 compute-2 ceph-osd[8575]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 09 10:59:38 compute-2 ceph-osd[8575]: _get_class not permitted to load lua
Oct 09 10:59:38 compute-2 ceph-osd[8575]: _get_class not permitted to load sdk
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 load_pgs
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 load_pgs opened 0 pgs
Oct 09 10:59:38 compute-2 ceph-osd[8575]: osd.2 0 log_to_monitors true
Oct 09 10:59:38 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2[8571]: 2025-10-09T10:59:38.454+0000 7f2267290740 -1 osd.2 0 log_to_monitors true
Oct 09 10:59:38 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Oct 09 10:59:38 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct 09 10:59:38 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Oct 09 10:59:38 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 09 10:59:38 compute-2 ceph-mon[6044]: Cluster is now healthy
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 10:59:38 compute-2 ceph-mon[6044]: osdmap e27: 3 total, 2 up, 3 in
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 09 10:59:38 compute-2 ceph-mon[6044]: osdmap e28: 3 total, 2 up, 3 in
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 09 10:59:38 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 10:59:38 compute-2 lvm[9243]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:59:38 compute-2 lvm[9243]: VG ceph_vg0 finished
Oct 09 10:59:38 compute-2 unruffled_kilby[8954]: {}
Oct 09 10:59:38 compute-2 systemd[1]: libpod-b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949.scope: Deactivated successfully.
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.948219851 +0000 UTC m=+0.895943096 container died b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 09 10:59:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-73d06a3b98e3d258d9c5d538b4fa66abf2c76cd854c65c666aa421d03110e7e2-merged.mount: Deactivated successfully.
Oct 09 10:59:38 compute-2 podman[8751]: 2025-10-09 10:59:38.986263711 +0000 UTC m=+0.933986956 container remove b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:38 compute-2 systemd[1]: libpod-conmon-b27ebd0c958df9cc20351174a7e22977568d17956bdc4dfc07bff244e7862949.scope: Deactivated successfully.
Oct 09 10:59:39 compute-2 sudo[8612]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:39 compute-2 sudo[9258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:59:39 compute-2 sudo[9258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:39 compute-2 sudo[9258]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:39 compute-2 sudo[9283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:39 compute-2 sudo[9283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:39 compute-2 sudo[9283]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:39 compute-2 sudo[9308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 10:59:39 compute-2 sudo[9308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:39 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 09 10:59:39 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 09 10:59:39 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0 done with init, starting boot process
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0 start_boot
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 09 10:59:39 compute-2 ceph-osd[8575]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 09 10:59:39 compute-2 ceph-mon[6044]: pgmap v81: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2229510725' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2229510725' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 10:59:39 compute-2 ceph-mon[6044]: osdmap e29: 3 total, 2 up, 3 in
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 10:59:39 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:39 compute-2 podman[9401]: 2025-10-09 10:59:39.909329957 +0000 UTC m=+0.067265033 container exec 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct 09 10:59:40 compute-2 podman[9401]: 2025-10-09 10:59:40.001523715 +0000 UTC m=+0.159458761 container exec_died 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:59:40 compute-2 sudo[9308]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:40 compute-2 sudo[9484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:40 compute-2 sudo[9484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:40 compute-2 sudo[9484]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:40 compute-2 sudo[9509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid e990987d-9393-5e96-99ae-9e3a3319f191 -- inventory --format=json-pretty --filter-for-batch
Oct 09 10:59:40 compute-2 sudo[9509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:40 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct 09 10:59:40 compute-2 ceph-mon[6044]: 2.1c scrub starts
Oct 09 10:59:40 compute-2 ceph-mon[6044]: 2.1c scrub ok
Oct 09 10:59:40 compute-2 ceph-mon[6044]: 3.1f scrub starts
Oct 09 10:59:40 compute-2 ceph-mon[6044]: 3.1f scrub ok
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2767728598' entity='client.admin' 
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:59:40 compute-2 ceph-mon[6044]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:40 compute-2 ceph-mon[6044]: Saving service ingress.rgw.default spec with placement count:2
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 09 10:59:40 compute-2 ceph-mon[6044]: osdmap e30: 3 total, 2 up, 3 in
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:40 compute-2 ceph-mon[6044]: pgmap v84: 131 pgs: 1 peering, 93 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:40 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:40 compute-2 podman[9572]: 2025-10-09 10:59:40.902593194 +0000 UTC m=+0.053967242 container create a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:40 compute-2 systemd[1]: Started libpod-conmon-a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc.scope.
Oct 09 10:59:40 compute-2 podman[9572]: 2025-10-09 10:59:40.870881268 +0000 UTC m=+0.022255336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:40 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:40 compute-2 podman[9572]: 2025-10-09 10:59:40.995945471 +0000 UTC m=+0.147319539 container init a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:41 compute-2 podman[9572]: 2025-10-09 10:59:41.002002687 +0000 UTC m=+0.153376735 container start a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:41 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:41 compute-2 determined_nobel[9588]: 167 167
Oct 09 10:59:41 compute-2 systemd[1]: libpod-a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc.scope: Deactivated successfully.
Oct 09 10:59:41 compute-2 podman[9572]: 2025-10-09 10:59:41.013556359 +0000 UTC m=+0.164930407 container attach a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_nobel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:41 compute-2 podman[9572]: 2025-10-09 10:59:41.015037339 +0000 UTC m=+0.166411387 container died a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 10:59:41 compute-2 systemd[1]: var-lib-containers-storage-overlay-0cfdf348d79c8e362609e79691b54bc0f8a2603cf1958b3c826ab53a0b282fed-merged.mount: Deactivated successfully.
Oct 09 10:59:41 compute-2 podman[9572]: 2025-10-09 10:59:41.07608292 +0000 UTC m=+0.227456968 container remove a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_nobel, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:41 compute-2 systemd[1]: libpod-conmon-a8cb92cabe50a65a06d5dd56f2ae07181e0482a091fd3d02f50892cf440792fc.scope: Deactivated successfully.
Oct 09 10:59:41 compute-2 podman[9614]: 2025-10-09 10:59:41.251594294 +0000 UTC m=+0.051877621 container create ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 09 10:59:41 compute-2 systemd[1]: Started libpod-conmon-ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec.scope.
Oct 09 10:59:41 compute-2 podman[9614]: 2025-10-09 10:59:41.224888808 +0000 UTC m=+0.025172105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:41 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c654f1a5132c8c1ce2e8bf04c59b03e3fc627340bb7ae9cc6a7438f7bc4f35d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c654f1a5132c8c1ce2e8bf04c59b03e3fc627340bb7ae9cc6a7438f7bc4f35d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c654f1a5132c8c1ce2e8bf04c59b03e3fc627340bb7ae9cc6a7438f7bc4f35d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:41 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c654f1a5132c8c1ce2e8bf04c59b03e3fc627340bb7ae9cc6a7438f7bc4f35d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:41 compute-2 podman[9614]: 2025-10-09 10:59:41.356887837 +0000 UTC m=+0.157171144 container init ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lalande, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:41 compute-2 podman[9614]: 2025-10-09 10:59:41.366369078 +0000 UTC m=+0.166652355 container start ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:41 compute-2 podman[9614]: 2025-10-09 10:59:41.386854273 +0000 UTC m=+0.187137560 container attach ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lalande, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:41 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct 09 10:59:41 compute-2 ceph-mon[6044]: purged_snaps scrub starts
Oct 09 10:59:41 compute-2 ceph-mon[6044]: purged_snaps scrub ok
Oct 09 10:59:41 compute-2 ceph-mon[6044]: 2.1f scrub starts
Oct 09 10:59:41 compute-2 ceph-mon[6044]: 2.1f scrub ok
Oct 09 10:59:41 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:41 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:41 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 10:59:41 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 10:59:41 compute-2 ceph-mon[6044]: osdmap e31: 3 total, 2 up, 3 in
Oct 09 10:59:41 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]: [
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:     {
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "available": false,
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "being_replaced": false,
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "ceph_device_lvm": false,
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "lsm_data": {},
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "lvs": [],
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "path": "/dev/sr0",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "rejected_reasons": [
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "Insufficient space (<5GB)",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "Has a FileSystem"
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         ],
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         "sys_api": {
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "actuators": null,
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "device_nodes": [
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:                 "sr0"
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             ],
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "devname": "sr0",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "human_readable_size": "482.00 KB",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "id_bus": "ata",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "model": "QEMU DVD-ROM",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "nr_requests": "2",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "parent": "/dev/sr0",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "partitions": {},
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "path": "/dev/sr0",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "removable": "1",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "rev": "2.5+",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "ro": "0",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "rotational": "0",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "sas_address": "",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "sas_device_handle": "",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "scheduler_mode": "mq-deadline",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "sectors": 0,
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "sectorsize": "2048",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "size": 493568.0,
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "support_discard": "2048",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "type": "disk",
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:             "vendor": "QEMU"
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:         }
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]:     }
Oct 09 10:59:42 compute-2 flamboyant_lalande[9631]: ]
Oct 09 10:59:42 compute-2 systemd[1]: libpod-ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec.scope: Deactivated successfully.
Oct 09 10:59:42 compute-2 podman[9614]: 2025-10-09 10:59:42.116739475 +0000 UTC m=+0.917022802 container died ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:42 compute-2 systemd[1]: var-lib-containers-storage-overlay-c654f1a5132c8c1ce2e8bf04c59b03e3fc627340bb7ae9cc6a7438f7bc4f35d2-merged.mount: Deactivated successfully.
Oct 09 10:59:42 compute-2 podman[9614]: 2025-10-09 10:59:42.177124764 +0000 UTC m=+0.977408051 container remove ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_lalande, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:59:42 compute-2 systemd[1]: libpod-conmon-ced440d7b5fd141408e99aa2e9e14c039702c53342822e5c585d668fbde99cec.scope: Deactivated successfully.
Oct 09 10:59:42 compute-2 sudo[9509]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 10:59:42 compute-2 sudo[10793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10793]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 10:59:42 compute-2 sudo[10818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10818]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:42 compute-2 sudo[10843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10843]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:42 compute-2 sudo[10868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10868]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.504 iops: 7809.019 elapsed_sec: 0.384
Oct 09 10:59:42 compute-2 ceph-osd[8575]: log_channel(cluster) log [WRN] : OSD bench result of 7809.019048 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 0 waiting for initial osdmap
Oct 09 10:59:42 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2[8571]: 2025-10-09T10:59:42.485+0000 7f2263213640 -1 osd.2 0 waiting for initial osdmap
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 check_osdmap_features require_osd_release unknown -> squid
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 set_numa_affinity not setting numa affinity
Oct 09 10:59:42 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-osd-2[8571]: 2025-10-09T10:59:42.512+0000 7f225e83b640 -1 osd.2 31 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 31 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 09 10:59:42 compute-2 sudo[10893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:42 compute-2 sudo[10893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10893]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:42 compute-2 sudo[10941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10941]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 10:59:42 compute-2 sudo[10966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10966]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[10991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 10:59:42 compute-2 sudo[10991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[10991]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e32 e32: 3 total, 3 up, 3 in
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 32 state: booting -> active
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.19( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.18( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1d( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1b( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1c( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1b( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1a( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1d( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1f( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1a( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1e( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1c( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.f( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.e( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.2( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.9( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.4( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.3( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.2( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.1( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.8( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.4( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.7( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.5( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.6( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.5( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.7( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.6( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.3( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.c( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.b( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.d( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.c( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.a( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.d( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.b( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.a( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.8( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.e( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.f( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.9( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.16( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.11( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.10( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.12( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.14( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.13( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.17( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.15( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.14( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.15( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.13( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.12( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.16( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.17( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.11( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.10( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.18( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1e( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[3.19( empty local-lis/les=0/0 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 32 pg[5.1f( empty local-lis/les=0/0 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
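Each line in the run above is osd.2 asserting itself as primary for a one-OSD acting set [2] at epoch 32: r=0 is its rank in the acting set, lpr the last peering reset epoch, and pi=[27,32) / pi=[16,32) the past intervals it still has to reconcile. A hedged way to see which PGs an OSD currently leads, with the osd id taken from the log:

    # list placement groups whose current primary is osd.2
    ceph pg ls-by-primary osd.2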
Oct 09 10:59:42 compute-2 ceph-mon[6044]: 2.1d scrub starts
Oct 09 10:59:42 compute-2 ceph-mon[6044]: 2.1d scrub ok
Oct 09 10:59:42 compute-2 ceph-mon[6044]: 4.1e scrub starts
Oct 09 10:59:42 compute-2 ceph-mon[6044]: 4.1e scrub ok
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Adjusting osd_memory_target on compute-2 to 127.8M
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Unable to set osd_memory_target on compute-2 to 134060032: error parsing value: Value '134060032' is below minimum 939524096
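The cephadm autotuner splits the host's memory budget across its local daemons; on this small VM that works out to 134060032 bytes (the 127.8M in the previous line), which is below the hard floor of 939524096 (896Mi), so the set is rejected and the earlier "config rm" leaves the default in place. A hedged workaround, assuming you would rather pin a value or stop the tuner than grow the host:

    # pin an explicit target at (or above) the minimum for this OSD
    ceph config set osd.2 osd_memory_target 939524096
    # or switch the cephadm memory autotuner off for all OSDs
    ceph config set osd osd_memory_target_autotune false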
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.conf
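The three "Updating" lines are the mgr pushing a regenerated minimal ceph.conf (fsid plus mon addresses) to every managed host, triggered by the generate-minimal-conf dispatch just above. The same output can be produced by hand:

    # print the minimal client config the mgr distributes
    ceph config generate-minimal-conf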
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Saving service node-exporter spec with placement *
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Saving service grafana spec with placement compute-0;count:1
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Saving service prometheus spec with placement compute-0;count:1
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:42 compute-2 ceph-mon[6044]: Saving service alertmanager spec with placement compute-0;count:1
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
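The "Saving service ... spec" lines record the orchestrator persisting the monitoring stack: node-exporter everywhere (placement *), grafana/prometheus/alertmanager pinned to compute-0 with count:1. A hedged equivalent from the CLI, using the service names as logged:

    # re-apply the node-exporter spec on every host
    ceph orch apply node-exporter --placement='*'
    # confirm the specs the orchestrator saved
    ceph orch ls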
Oct 09 10:59:42 compute-2 ceph-mon[6044]: pgmap v86: 193 pgs: 1 peering, 155 unknown, 37 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 10:59:42 compute-2 ceph-mon[6044]: osd.2 [v2:192.168.122.102:6800/1848230378,v1:192.168.122.102:6801/1848230378] boot
Oct 09 10:59:42 compute-2 ceph-mon[6044]: osdmap e32: 3 total, 3 up, 3 in
Oct 09 10:59:42 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
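osd.2 has booted with v2/v1 messenger addresses on 192.168.122.102 and osdmap e32 now counts 3 up / 3 in; the mgr immediately pulls its metadata, as the dispatch above shows. The same query by hand:

    # hostname, device class, container image, etc. for osd.2
    ceph osd metadata 2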
Oct 09 10:59:42 compute-2 sudo[11016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 10:59:42 compute-2 sudo[11016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[11016]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[11041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 10:59:42 compute-2 sudo[11041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[11041]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[11066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:42 compute-2 sudo[11066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[11066]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:42 compute-2 sudo[11091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:42 compute-2 sudo[11091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:42 compute-2 sudo[11091]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:43 compute-2 sudo[11116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:43 compute-2 sudo[11116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:43 compute-2 sudo[11116]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:43 compute-2 sudo[11164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:43 compute-2 sudo[11164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:43 compute-2 sudo[11164]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:43 compute-2 sudo[11189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 10:59:43 compute-2 sudo[11189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:43 compute-2 sudo[11189]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:43 compute-2 sudo[11214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 10:59:43 compute-2 sudo[11214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:43 compute-2 sudo[11214]: pam_unix(sudo:session): session closed for user root
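The sudo trail above is cephadm's atomic config write: stage the file under /tmp/cephadm-<fsid>/..., touch/chown/chmod it, then mv it over the live path so a reader never sees a half-written ceph.conf. A minimal sketch of the same pattern (FSID and the file contents are placeholders; keep source and destination on one filesystem so the rename stays atomic):

    # write-new-then-rename: readers see either the old or the new file
    tmp=/var/lib/ceph/FSID/config/ceph.conf.new
    install -m 644 -o root -g root /dev/null "$tmp"
    cat > "$tmp" <<'EOF'
    [global]
    fsid = e990987d-9393-5e96-99ae-9e3a3319f191
    EOF
    mv "$tmp" /var/lib/ceph/FSID/config/ceph.conf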
Oct 09 10:59:43 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e33 e33: 3 total, 3 up, 3 in
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1a( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1b( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1b( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.9( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.18( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1d( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.19( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1e( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.19( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1f( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.7( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.18( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.1( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.0( empty local-lis/les=32/33 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.3( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.4( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.2( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.6( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.8( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.5( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.4( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.7( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.1( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.2( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.6( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.0( empty local-lis/les=32/33 n=0 ec=14/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.b( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.b( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.8( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.e( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.16( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.17( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.12( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.11( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.14( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.12( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.15( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.13( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[5.10( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=16/16 les/c/f=17/17/0 sis=32) [2] r=0 lpr=32 pi=[16,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 33 pg[3.17( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=27/27 les/c/f=28/28/0 sis=32) [2] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:43 compute-2 ceph-mon[6044]: 2.1e scrub starts
Oct 09 10:59:43 compute-2 ceph-mon[6044]: 2.1e scrub ok
Oct 09 10:59:43 compute-2 ceph-mon[6044]: 4.11 scrub starts
Oct 09 10:59:43 compute-2 ceph-mon[6044]: 4.11 scrub ok
Oct 09 10:59:43 compute-2 ceph-mon[6044]: OSD bench result of 7809.019048 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
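The mclock scheduler benchmarks each OSD at boot; 7809 IOPS falls outside the configured sanity window (50-500 IOPS), so the measurement is discarded and the assumed capacity stays at the 315 IOPS default. Following the log's own recommendation, in hedged CLI form (the 7800 figure below is just the boot-bench number; a fio run against the backing device is the better source):

    # rough re-check with the built-in bench
    ceph tell osd.2 bench
    # then override the capacity mclock assumes for this (ssd-class) OSD
    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd 7800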
Oct 09 10:59:43 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 10:59:43 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 10:59:43 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:43 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1352278890' entity='client.admin' 
Oct 09 10:59:43 compute-2 ceph-mon[6044]: osdmap e33: 3 total, 3 up, 3 in
Oct 09 10:59:44 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Oct 09 10:59:44 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Oct 09 10:59:44 compute-2 ceph-mon[6044]: 2.9 scrub starts
Oct 09 10:59:44 compute-2 ceph-mon[6044]: 2.9 scrub ok
Oct 09 10:59:44 compute-2 ceph-mon[6044]: 4.1f scrub starts
Oct 09 10:59:44 compute-2 ceph-mon[6044]: 4.1f scrub ok
Oct 09 10:59:44 compute-2 ceph-mon[6044]: 5.1d scrub starts
Oct 09 10:59:44 compute-2 ceph-mon[6044]: 5.1d scrub ok
Oct 09 10:59:44 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/36201585' entity='client.admin' 
Oct 09 10:59:44 compute-2 ceph-mon[6044]: pgmap v89: 193 pgs: 65 peering, 62 unknown, 66 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
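pgmap v86 to v89 shows peering progressing (155 unknown down to 62) as osd.2's PGs activate; note the usable capacity also grew from 40 GiB to 60 GiB as the third OSD joined. A quick live view of the same counters:

    # one-line PG state summary, same numbers as the pgmap lines
    ceph pg stat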
Oct 09 10:59:45 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 09 10:59:45 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 09 10:59:45 compute-2 ceph-mon[6044]: 2.a scrub starts
Oct 09 10:59:45 compute-2 ceph-mon[6044]: 2.a scrub ok
Oct 09 10:59:45 compute-2 ceph-mon[6044]: 4.12 scrub starts
Oct 09 10:59:45 compute-2 ceph-mon[6044]: 4.12 scrub ok
Oct 09 10:59:45 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1390179226' entity='client.admin' 
Oct 09 10:59:45 compute-2 ceph-mon[6044]: 3.1a scrub starts
Oct 09 10:59:45 compute-2 ceph-mon[6044]: 3.1a scrub ok
Oct 09 10:59:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:46 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 09 10:59:46 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 09 10:59:47 compute-2 ceph-mon[6044]: 2.8 scrub starts
Oct 09 10:59:47 compute-2 ceph-mon[6044]: 2.8 scrub ok
Oct 09 10:59:47 compute-2 ceph-mon[6044]: 4.13 scrub starts
Oct 09 10:59:47 compute-2 ceph-mon[6044]: 4.13 scrub ok
Oct 09 10:59:47 compute-2 ceph-mon[6044]: 5.1a scrub starts
Oct 09 10:59:47 compute-2 ceph-mon[6044]: 5.1a scrub ok
Oct 09 10:59:47 compute-2 ceph-mon[6044]: pgmap v90: 193 pgs: 64 peering, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 09 10:59:47 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Oct 09 10:59:47 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Oct 09 10:59:47 compute-2 sudo[11239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:59:47 compute-2 sudo[11239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:59:47 compute-2 sudo[11239]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:47 compute-2 sudo[11264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 10:59:47 compute-2 sudo[11264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
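That sudo line is how the cephadm mgr module deploys daemons: it ships a content-addressed copy of the cephadm binary to /var/lib/ceph/<fsid>/cephadm.<sha256> and runs "_orch deploy" against the pinned image digest. To enumerate what ended up deployed on this host (run as root on compute-2):

    # list cephadm-managed daemons, their versions and container IDs
    cephadm ls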
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 2.7 deep-scrub starts
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 2.7 deep-scrub ok
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 4.14 scrub starts
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 4.14 scrub ok
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 2.6 scrub starts
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 2.6 scrub ok
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1920736686' entity='client.admin' 
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 5.1c deep-scrub starts
Oct 09 10:59:48 compute-2 ceph-mon[6044]: 5.1c deep-scrub ok
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.klwwrz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:48 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
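Before deploying the RGW daemon, the mgr mints its cephx identity with mon/mgr caps plus osd access scoped by the rgw pool tag, as the get-or-create dispatch above shows. The stored key and caps can be verified with:

    ceph auth get client.rgw.rgw.compute-2.klwwrz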
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.24824987 +0000 UTC m=+0.036643524 container create 1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_pike, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:48 compute-2 systemd[1]: Started libpod-conmon-1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92.scope.
Oct 09 10:59:48 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.297128348 +0000 UTC m=+0.085522012 container init 1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.302675657 +0000 UTC m=+0.091069311 container start 1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:59:48 compute-2 great_pike[11348]: 167 167
Oct 09 10:59:48 compute-2 systemd[1]: libpod-1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92.scope: Deactivated successfully.
Oct 09 10:59:48 compute-2 conmon[11348]: conmon 1f5e02f84c58c9267ac4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92.scope/container/memory.events
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.30986371 +0000 UTC m=+0.098257394 container attach 1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.310545184 +0000 UTC m=+0.098938838 container died 1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_pike, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 09 10:59:48 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.231622376 +0000 UTC m=+0.020016050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:48 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 09 10:59:48 compute-2 systemd[1]: var-lib-containers-storage-overlay-aa9b5c3e160d85bbc586f998760c95f863be4dd336c3cbaf99a167531161feb1-merged.mount: Deactivated successfully.
Oct 09 10:59:48 compute-2 podman[11332]: 2025-10-09 10:59:48.354992991 +0000 UTC m=+0.143386645 container remove 1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_pike, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 09 10:59:48 compute-2 systemd[1]: libpod-conmon-1f5e02f84c58c9267ac4ebc6bd7e8455276922d32ca7fd68e1899fe7826eec92.scope: Deactivated successfully.
Oct 09 10:59:48 compute-2 systemd[1]: Reloading.
Oct 09 10:59:48 compute-2 systemd-rc-local-generator[11392]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:48 compute-2 systemd-sysv-generator[11395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:48 compute-2 systemd[1]: Reloading.
Oct 09 10:59:48 compute-2 systemd-rc-local-generator[11434]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 10:59:48 compute-2 systemd-sysv-generator[11437]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 10:59:48 compute-2 systemd[1]: Starting Ceph rgw.rgw.compute-2.klwwrz for e990987d-9393-5e96-99ae-9e3a3319f191...
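The two "Reloading." passes are systemd picking up the unit cephadm just wrote; cephadm names units ceph-<fsid>@<daemon>.service. Status for this instance, with the fsid and daemon name taken from the log:

    systemctl status ceph-e990987d-9393-5e96-99ae-9e3a3319f191@rgw.rgw.compute-2.klwwrz.service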
Oct 09 10:59:48 compute-2 sudo[11469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhghzrspjwgawcrffenwgeqjmxbrzcc ; /usr/bin/python3'
Oct 09 10:59:48 compute-2 sudo[11469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:59:49 compute-2 python3[11472]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 10:59:49 compute-2 sudo[11469]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:49 compute-2 podman[11518]: 2025-10-09 10:59:49.13636571 +0000 UTC m=+0.049078106 container create f134812f79c2c84eb2a3720f27359a1bc1c760fe615f9a1d9b8aa0468869f338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-2-klwwrz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 10:59:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2eb904c589aac8d04026d12459664f9c4993112e821a1d156017548cf92627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2eb904c589aac8d04026d12459664f9c4993112e821a1d156017548cf92627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2eb904c589aac8d04026d12459664f9c4993112e821a1d156017548cf92627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:49 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d2eb904c589aac8d04026d12459664f9c4993112e821a1d156017548cf92627/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.klwwrz supports timestamps until 2038 (0x7fffffff)
Oct 09 10:59:49 compute-2 podman[11518]: 2025-10-09 10:59:49.198069864 +0000 UTC m=+0.110782270 container init f134812f79c2c84eb2a3720f27359a1bc1c760fe615f9a1d9b8aa0468869f338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-2-klwwrz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:59:49 compute-2 podman[11518]: 2025-10-09 10:59:49.202429112 +0000 UTC m=+0.115141498 container start f134812f79c2c84eb2a3720f27359a1bc1c760fe615f9a1d9b8aa0468869f338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-2-klwwrz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:59:49 compute-2 bash[11518]: f134812f79c2c84eb2a3720f27359a1bc1c760fe615f9a1d9b8aa0468869f338
Oct 09 10:59:49 compute-2 podman[11518]: 2025-10-09 10:59:49.116115573 +0000 UTC m=+0.028827989 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:59:49 compute-2 systemd[1]: Started Ceph rgw.rgw.compute-2.klwwrz for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 10:59:49 compute-2 ceph-mon[6044]: 4.9 scrub starts
Oct 09 10:59:49 compute-2 ceph-mon[6044]: 4.9 scrub ok
Oct 09 10:59:49 compute-2 ceph-mon[6044]: Deploying daemon rgw.rgw.compute-2.klwwrz on compute-2
Oct 09 10:59:49 compute-2 ceph-mon[6044]: 2.2 scrub starts
Oct 09 10:59:49 compute-2 ceph-mon[6044]: 2.2 scrub ok
Oct 09 10:59:49 compute-2 ceph-mon[6044]: 5.1b scrub starts
Oct 09 10:59:49 compute-2 ceph-mon[6044]: 5.1b scrub ok
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3282222437' entity='client.admin' 
Oct 09 10:59:49 compute-2 ceph-mon[6044]: pgmap v91: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 10:59:49 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
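pgp_num_actual is the mgr stepping placement toward each pool's target pgp_num; it is driven internally rather than set by hand. The user-facing knobs, and a way to confirm convergence for the pools named above:

    # pgp_num should settle at 32 once the mgr finishes stepping it
    ceph osd pool get volumes pg_num
    ceph osd pool get volumes pgp_num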
Oct 09 10:59:49 compute-2 radosgw[11550]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 09 10:59:49 compute-2 sudo[11264]: pam_unix(sudo:session): session closed for user root
Oct 09 10:59:49 compute-2 radosgw[11550]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 09 10:59:49 compute-2 radosgw[11550]: framework: beast
Oct 09 10:59:49 compute-2 radosgw[11550]: framework conf key: endpoint, val: 192.168.122.102:8082
Oct 09 10:59:49 compute-2 radosgw[11550]: init_numa not setting numa affinity
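radosgw 19.2.3 came up on the beast frontend bound to 192.168.122.102:8082. A hedged smoke test from any node that can reach that address (with default settings an anonymous request returns a ListAllMyBucketsResult XML document):

    curl -s http://192.168.122.102:8082/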
Oct 09 10:59:49 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct 09 10:59:49 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct 09 10:59:49 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.1c( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[6.1e( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.19( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[6.1b( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.1d( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.122233391s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367109299s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.122202873s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367109299s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.122126579s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367166519s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.122068405s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367166519s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.18( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121965408s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367156982s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.18( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121953011s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367156982s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121841431s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367086411s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.7( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121892929s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367147446s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1e( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121809959s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367086411s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.6( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.122027397s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367321014s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.6( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.122010231s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367321014s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.5( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121951103s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367277145s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.7( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121855736s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367147446s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.5( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121938705s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367277145s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.3( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2755107006' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.2( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.19( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121399879s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367126465s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121506691s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367250443s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.3( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121489525s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367250443s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.19( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.121376038s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367126465s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.1( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.6( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.9( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[6.1( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.8( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.14( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.120127678s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367300034s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.2( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.120113373s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367300034s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119736671s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.366975784s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1f( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119881630s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367128372s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.19( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119716644s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.366975784s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1f( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119864464s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367128372s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[6.12( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119792938s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367370605s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.5( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119776726s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367370605s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.3( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119701385s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367380142s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119214058s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.366931915s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.18( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119200706s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.366931915s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1e( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119065285s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.366876602s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1e( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119056702s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.366876602s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.3( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119559288s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367380142s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[6.17( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.2( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119343758s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367439270s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.2( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119333267s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367439270s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118577957s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.366760254s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.1f( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1b( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118559837s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.366760254s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119082451s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367403030s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.7( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.119071007s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367403030s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118940353s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367414474s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118930817s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367414474s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1c( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118348122s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.366861343s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.1c( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118330956s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.366861343s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[6.1c( empty local-lis/les=0/0 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[4.15( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.112989426s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.361881256s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1d( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.112976074s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.361881256s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.112951279s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.361883163s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.4( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118405342s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367292404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.1c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.112933159s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.361883163s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.4( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118332863s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367292404s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118159294s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367198944s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.f( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118143082s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367198944s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118292809s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367448807s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.6( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118279457s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367448807s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.b( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118230820s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367538452s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.b( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118216515s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367538452s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.a( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118148804s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367477417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.a( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118131638s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367477417s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118117332s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367471695s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.c( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118110657s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367504120s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118105888s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367532730s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.c( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118074417s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367504120s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.c( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118094444s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367471695s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.a( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118092537s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367532730s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.d( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118104935s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367584229s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.d( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.118092537s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367584229s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.f( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117955208s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367572784s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117987633s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367612839s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.16( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117975235s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367612839s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117976189s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367624283s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.f( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117932320s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367572784s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.9( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117961884s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367624283s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.10( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117938042s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367609024s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117856026s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367649078s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.12( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117846489s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367656708s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.17( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117843628s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367649078s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117864609s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367685318s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.12( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117827415s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367656708s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.14( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117849350s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367685318s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.10( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117807388s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367609024s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.13( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117802620s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367721558s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.13( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117789268s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367721558s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.14( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117722511s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367685318s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117747307s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367736816s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.14( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117710114s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367685318s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.15( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117733955s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367736816s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117677689s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367782593s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.17( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117743492s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367866516s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.16( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117709160s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367834091s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.10( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117666245s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367782593s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.17( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117731094s) [0] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367866516s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[3.16( empty local-lis/les=32/33 n=0 ec=27/14 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117693901s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367834091s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117607117s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 active pruub 21.367773056s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[5.11( empty local-lis/les=32/33 n=0 ec=29/16 lis/c=32/32 les/c/f=33/33/0 sis=34 pruub=10.117588043s) [1] r=-1 lpr=34 pi=[32,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 21.367773056s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.1f( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.1d( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.16( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.11( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.14( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.a( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[7.5( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:59:49 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
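
The burst of osd.2 messages above is ordinary peering after osdmap epoch 34: where a PG's up/acting set moved from [2] to [0] or [1], osd.2's role drops from 0 to -1 and the PG transitions to Stray on this OSD; where osd.2 remains the sole member ([2], r=0), the PG transitions to Primary. A minimal way to inspect the same state by hand, using standard Ceph CLI and a pg id taken from the log (3.7):

    ceph pg 3.7 query         # up/acting sets, peering state, past intervals
    ceph pg dump pgs_brief    # one-line up/acting/state summary per PG
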
Oct 09 10:59:50 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Oct 09 10:59:50 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Oct 09 10:59:50 compute-2 ceph-mon[6044]: 4.17 scrub starts
Oct 09 10:59:50 compute-2 ceph-mon[6044]: 4.17 scrub ok
Oct 09 10:59:50 compute-2 ceph-mon[6044]: 2.4 scrub starts
Oct 09 10:59:50 compute-2 ceph-mon[6044]: 2.4 scrub ok
Oct 09 10:59:50 compute-2 ceph-mon[6044]: 3.9 scrub starts
Oct 09 10:59:50 compute-2 ceph-mon[6044]: 3.9 scrub ok
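
The scrub lines arrive in starts/ok pairs, one per placement group; here they complete immediately because every PG is still empty. A scrub of a specific PG can also be requested manually with the standard commands (pg ids copied from the lines above):

    ceph pg scrub 4.17        # metadata-only scrub
    ceph pg deep-scrub 5.0    # also reads and checksums object data
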
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.vbxein", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:50 compute-2 ceph-mon[6044]: Deploying daemon rgw.rgw.compute-1.vbxein on compute-1
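
The auth get-or-create call above is cephadm creating the keyring for the new RGW daemon just before deploying it; the JSON payload in the audit line corresponds to this standard CLI form (entity name and caps copied from the log):

    ceph auth get-or-create client.rgw.rgw.compute-1.vbxein \
        mon 'allow *' mgr 'allow rw' osd 'allow rwx tag rgw *=*'
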
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
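
The six "osd pool set ... pgp_num_actual ..." completions above are the active mgr stepping each pool's pgp_num up to match its pg_num of 32 (most likely the pg_autoscaler module, given that all six commands originate from mgr.compute-0). The equivalent manual command and its check, for one of the pools named in the log:

    ceph osd pool set volumes pgp_num_actual 32
    ceph osd pool get volumes pgp_num
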
Oct 09 10:59:50 compute-2 ceph-mon[6044]: osdmap e34: 3 total, 3 up, 3 in
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/2755107006' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
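
The same "osd pool application enable" command is dispatched twice, once with the client address and once with from='client.? ' and no address; the second entry appears to be the proxied copy of the same request (this mon is a peon, and client commands are forwarded to the leader). The CLI form of the command, copied from the payload:

    ceph osd pool application enable .rgw.root rgw
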
Oct 09 10:59:50 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1128491331' entity='client.admin' 
Oct 09 10:59:50 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[6.1c( empty local-lis/les=34/35 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.1( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.2( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.1f( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[6.1( empty local-lis/les=34/35 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.3( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.1d( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[6.1b( empty local-lis/les=34/35 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.19( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.6( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.5( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.1c( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.1f( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[6.1e( empty local-lis/les=34/35 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.1d( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.a( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.9( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.14( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.16( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[6.17( empty local-lis/les=34/35 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.15( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.14( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[7.11( empty local-lis/les=34/35 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=34/35 n=0 ec=27/13 lis/c=27/27 les/c/f=28/28/0 sis=34) [2] r=0 lpr=34 pi=[27,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[6.12( empty local-lis/les=34/35 n=0 ec=31/17 lis/c=31/31 les/c/f=33/33/0 sis=34) [2] r=0 lpr=34 pi=[31,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 35 pg[4.8( empty local-lis/les=34/35 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=34) [2] r=0 lpr=34 pi=[29,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:59:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:51 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 09 10:59:51 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 09 10:59:51 compute-2 ceph-mon[6044]: 4.10 scrub starts
Oct 09 10:59:51 compute-2 ceph-mon[6044]: 4.10 scrub ok
Oct 09 10:59:51 compute-2 ceph-mon[6044]: 2.1a scrub starts
Oct 09 10:59:51 compute-2 ceph-mon[6044]: 2.1a scrub ok
Oct 09 10:59:51 compute-2 ceph-mon[6044]: 5.0 deep-scrub starts
Oct 09 10:59:51 compute-2 ceph-mon[6044]: 5.0 deep-scrub ok
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 09 10:59:51 compute-2 ceph-mon[6044]: osdmap e35: 3 total, 3 up, 3 in
Oct 09 10:59:51 compute-2 ceph-mon[6044]: pgmap v94: 194 pgs: 1 creating+peering, 44 peering, 149 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/574181055' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.cjdyiw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:51 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:59:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct 09 10:59:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 09 10:59:51 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 10:59:52 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.8 deep-scrub starts
Oct 09 10:59:52 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.8 deep-scrub ok
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 6.18 scrub starts
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 6.18 scrub ok
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 7.1c scrub starts
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 7.1c scrub ok
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 5.e scrub starts
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 5.e scrub ok
Oct 09 10:59:52 compute-2 ceph-mon[6044]: Deploying daemon rgw.rgw.compute-0.cjdyiw on compute-0
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 6.1f deep-scrub starts
Oct 09 10:59:52 compute-2 ceph-mon[6044]: 6.1f deep-scrub ok
Oct 09 10:59:52 compute-2 ceph-mon[6044]: osdmap e36: 3 total, 3 up, 3 in
Oct 09 10:59:52 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 10:59:52 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 10:59:52 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 10:59:52 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/574181055' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 09 10:59:52 compute-2 ceph-mon[6044]: mgrmap e12: compute-0.izrudc(active, since 2m), standbys: compute-2.agiurv, compute-1.rtiqvm
Oct 09 10:59:52 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 10:59:52 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct 09 10:59:53 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 09 10:59:53 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 2.17 scrub starts
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 2.17 scrub ok
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 3.8 deep-scrub starts
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 3.8 deep-scrub ok
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 4.f scrub starts
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 4.f scrub ok
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 10:59:53 compute-2 ceph-mon[6044]: osdmap e37: 3 total, 3 up, 3 in
Oct 09 10:59:53 compute-2 ceph-mon[6044]: pgmap v97: 195 pgs: 1 unknown, 1 creating+peering, 44 peering, 149 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
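
This pgmap line catches the new RGW pools mid-creation (1 unknown, 45 peering); by the pgmap at 11:00:05 below, all 197 PGs are active+clean. A small parser for these summary lines, fitted to the format seen in this log only:

    import re

    # The pgmap summary line above, verbatim.
    LINE = ("pgmap v97: 195 pgs: 1 unknown, 1 creating+peering, 44 peering, "
            "149 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail")

    m = re.search(r"pgmap v(\d+): (\d+) pgs: ([^;]+);", LINE)
    version, total = int(m.group(1)), int(m.group(2))
    states = {}
    for part in m.group(3).split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)

    assert sum(states.values()) == total  # the per-state counts add up
    print(version, states)  # 97 {'unknown': 1, 'creating+peering': 1, ...}
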
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/731276261' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:53 compute-2 ceph-mon[6044]: from='mgr.14122 192.168.122.100:0/1750874438' entity='mgr.compute-0.izrudc' 
Oct 09 10:59:53 compute-2 ceph-mon[6044]: 4.4 scrub starts
Oct 09 10:59:53 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  1: '-n'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  2: 'mgr.compute-2.agiurv'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  3: '-f'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  4: '--setuser'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  5: 'ceph'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  6: '--setgroup'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  7: 'ceph'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr respawn  10: '--default-log-to-stderr=false'
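
The "respawning because set of enabled modules changed!" line and the argv echo above are the standby mgr re-execing itself after the dashboard module was disabled and re-enabled at 10:59:52-53. A sketch of driving that from the CLI and waiting for the mgr map to settle (assumes an admin keyring; field names as reported by `ceph mgr dump`):

    import json
    import subprocess
    import time

    def ceph_json(*args: str):
        out = subprocess.run(
            ["ceph", *args, "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # Changing the enabled-module set makes every mgr re-exec itself.
    subprocess.run(["ceph", "mgr", "module", "enable", "dashboard"], check=True)

    # Poll the mgr map until an active mgr has reported back in.
    for _ in range(30):
        mgrmap = ceph_json("mgr", "dump")
        if mgrmap.get("available"):
            break
        time.sleep(1)
    print(mgrmap["active_name"], [s["name"] for s in mgrmap.get("standbys", [])])
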
Oct 09 10:59:53 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 09 10:59:53 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 10:59:53 compute-2 sshd-session[3149]: Connection closed by 192.168.122.100 port 60254
Oct 09 10:59:53 compute-2 sshd-session[3120]: Connection closed by 192.168.122.100 port 60242
Oct 09 10:59:53 compute-2 sshd-session[3093]: Connection closed by 192.168.122.100 port 60232
Oct 09 10:59:53 compute-2 sshd-session[3064]: Connection closed by 192.168.122.100 port 60226
Oct 09 10:59:53 compute-2 sshd-session[3035]: Connection closed by 192.168.122.100 port 60224
Oct 09 10:59:53 compute-2 sshd-session[2861]: Connection closed by 192.168.122.100 port 60162
Oct 09 10:59:53 compute-2 sshd-session[2919]: Connection closed by 192.168.122.100 port 60186
Oct 09 10:59:53 compute-2 sshd-session[3006]: Connection closed by 192.168.122.100 port 60208
Oct 09 10:59:53 compute-2 sshd-session[2948]: Connection closed by 192.168.122.100 port 60198
Oct 09 10:59:53 compute-2 sshd-session[2977]: Connection closed by 192.168.122.100 port 60206
Oct 09 10:59:53 compute-2 sshd-session[2860]: Connection closed by 192.168.122.100 port 60158
Oct 09 10:59:53 compute-2 sshd-session[2890]: Connection closed by 192.168.122.100 port 60178
Oct 09 10:59:53 compute-2 sshd-session[2837]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 sshd-session[2945]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 sshd-session[2916]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 systemd[1]: session-4.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 sshd-session[3032]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 systemd[1]: session-9.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 sshd-session[3003]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 systemd[1]: session-8.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 sshd-session[3090]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 systemd[1]: session-15.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 sshd-session[3117]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 sshd-session[2974]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 4 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd[1]: session-11.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd[1]: session-12.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd[1]: session-14.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd[1]: session-10.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 9 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 8 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 11 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 14 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 12 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 15 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 10 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 sshd-session[2887]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 sshd-session[3146]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 sshd-session[3061]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 sshd-session[2855]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 4.
Oct 09 10:59:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setuser ceph since I am not root
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 9.
Oct 09 10:59:53 compute-2 systemd[1]: session-16.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd[1]: session-16.scope: Consumed 55.805s CPU time.
Oct 09 10:59:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setgroup ceph since I am not root
Oct 09 10:59:53 compute-2 systemd[1]: session-13.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd[1]: session-6.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd[1]: session-7.scope: Deactivated successfully.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 7 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 16 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 13 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Session 6 logged out. Waiting for processes to exit.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 8.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 15.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 11.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 12.
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 14.
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: pidfile_write: ignore empty --pid-file
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 10.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 16.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 13.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 6.
Oct 09 10:59:53 compute-2 systemd-logind[844]: Removed session 7.
Oct 09 10:59:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'alerts'
Oct 09 10:59:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:54.000+0000 7facda4a4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'balancer'
Oct 09 10:59:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:54.079+0000 7facda4a4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Module balancer has missing NOTIFY_TYPES member
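
The recurring "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" lines during this load sequence are warnings, not failures: the loader expects each mgr module to declare which notification kinds it consumes, and modules that omit the attribute still load, as the rest of this log shows. A skeleton of what the loader is looking for; mgr_module is only importable inside the mgr runtime, and the chosen notify type is illustrative:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the loader warning and limits
        # delivery to the notification kinds the module actually handles.
        # NotifyType.pg_summary is one illustrative member of the enum.
        NOTIFY_TYPES = [NotifyType.pg_summary]

        def notify(self, notify_type, notify_id):
            if notify_type == NotifyType.pg_summary:
                self.log.info("pg summary changed")
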
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'cephadm'
Oct 09 10:59:54 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 09 10:59:54 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 09 10:59:54 compute-2 ceph-mon[6044]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 7.12 deep-scrub starts
Oct 09 10:59:54 compute-2 ceph-mon[6044]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 7.12 deep-scrub ok
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 5.4 scrub starts
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 5.4 scrub ok
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 4.4 scrub ok
Oct 09 10:59:54 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/731276261' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 09 10:59:54 compute-2 ceph-mon[6044]: osdmap e38: 3 total, 3 up, 3 in
Oct 09 10:59:54 compute-2 ceph-mon[6044]: mgrmap e13: compute-0.izrudc(active, since 2m), standbys: compute-2.agiurv, compute-1.rtiqvm
Oct 09 10:59:54 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 10:59:54 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 10:59:54 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 10:59:54 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 10:59:54 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 6.6 scrub starts
Oct 09 10:59:54 compute-2 ceph-mon[6044]: 6.6 scrub ok
Oct 09 10:59:54 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'crash'
Oct 09 10:59:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:54.864+0000 7facda4a4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 10:59:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'dashboard'
Oct 09 10:59:55 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.1b deep-scrub starts
Oct 09 10:59:55 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.1b deep-scrub ok
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'devicehealth'
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:55.498+0000 7facda4a4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 10:59:55 compute-2 ceph-mon[6044]: 2.16 scrub starts
Oct 09 10:59:55 compute-2 ceph-mon[6044]: 2.16 scrub ok
Oct 09 10:59:55 compute-2 ceph-mon[6044]: 3.1d scrub starts
Oct 09 10:59:55 compute-2 ceph-mon[6044]: 3.1d scrub ok
Oct 09 10:59:55 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 10:59:55 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 10:59:55 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 10:59:55 compute-2 ceph-mon[6044]: osdmap e39: 3 total, 3 up, 3 in
Oct 09 10:59:55 compute-2 ceph-mon[6044]: 6.c scrub starts
Oct 09 10:59:55 compute-2 ceph-mon[6044]: 6.c scrub ok
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]:   from numpy import show_config as show_numpy_config
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:55.674+0000 7facda4a4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'influx'
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:55.756+0000 7facda4a4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'insights'
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'iostat'
Oct 09 10:59:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:55.892+0000 7facda4a4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 10:59:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'k8sevents'
Oct 09 10:59:56 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 10:59:56 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct 09 10:59:56 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 09 10:59:56 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 10:59:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'localpool'
Oct 09 10:59:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 10:59:56 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Oct 09 10:59:56 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 2.14 deep-scrub starts
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 2.14 deep-scrub ok
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 3.1b deep-scrub starts
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 3.1b deep-scrub ok
Oct 09 10:59:56 compute-2 ceph-mon[6044]: osdmap e40: 3 total, 3 up, 3 in
Oct 09 10:59:56 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 10:59:56 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 10:59:56 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 10:59:56 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 10:59:56 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 6.4 scrub starts
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 6.4 scrub ok
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 3.0 scrub starts
Oct 09 10:59:56 compute-2 ceph-mon[6044]: 3.0 scrub ok
Oct 09 10:59:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mirroring'
Oct 09 10:59:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'nfs'
Oct 09 10:59:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:56.948+0000 7facda4a4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 10:59:56 compute-2 ceph-mgr[6348]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 10:59:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'orchestrator'
Oct 09 10:59:57 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct 09 10:59:57 compute-2 ceph-mon[6044]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 09 10:59:57 compute-2 ceph-mon[6044]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.166+0000 7facda4a4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.245+0000 7facda4a4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_support'
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.317+0000 7facda4a4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 10:59:57 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 09 10:59:57 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.401+0000 7facda4a4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'progress'
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.474+0000 7facda4a4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'prometheus'
Oct 09 10:59:57 compute-2 ceph-mon[6044]: 7.17 scrub starts
Oct 09 10:59:57 compute-2 ceph-mon[6044]: 7.17 scrub ok
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 10:59:57 compute-2 ceph-mon[6044]: osdmap e41: 3 total, 3 up, 3 in
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? 192.168.122.102:0/330463100' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? 192.168.122.101:0/1032475736' entity='client.rgw.rgw.compute-1.vbxein' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 10:59:57 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 10:59:57 compute-2 ceph-mon[6044]: 6.0 scrub starts
Oct 09 10:59:57 compute-2 ceph-mon[6044]: 6.0 scrub ok
Oct 09 10:59:57 compute-2 ceph-mon[6044]: 5.d scrub starts
Oct 09 10:59:57 compute-2 ceph-mon[6044]: 5.d scrub ok
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.829+0000 7facda4a4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rbd_support'
Oct 09 10:59:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:57.927+0000 7facda4a4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 10:59:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'restful'
Oct 09 10:59:58 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct 09 10:59:58 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rgw'
Oct 09 10:59:58 compute-2 radosgw[11550]: v1 topic migration: starting v1 topic migration..
Oct 09 10:59:58 compute-2 radosgw[11550]: LDAP not started since no server URIs were provided in the configuration.
Oct 09 10:59:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-rgw-rgw-compute-2-klwwrz[11546]: 2025-10-09T10:59:58.349+0000 7f2528016980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 09 10:59:58 compute-2 radosgw[11550]: v1 topic migration: finished v1 topic migration
Oct 09 10:59:58 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct 09 10:59:58 compute-2 radosgw[11550]: framework: beast
Oct 09 10:59:58 compute-2 radosgw[11550]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 09 10:59:58 compute-2 radosgw[11550]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 09 10:59:58 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct 09 10:59:58 compute-2 radosgw[11550]: starting handler: beast
Oct 09 10:59:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:58.388+0000 7facda4a4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 10:59:58 compute-2 ceph-mgr[6348]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 10:59:58 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rook'
Oct 09 10:59:58 compute-2 radosgw[11550]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 10:59:58 compute-2 radosgw[11550]: mgrc service_daemon_register rgw.24148 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.klwwrz,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864100,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=1063f874-5e69-4914-9198-c2cdfb8f2870,zone_name=default,zonegroup_id=59510648-2c54-408c-beb4-010e0f01e98d,zonegroup_name=default}
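
The registration above shows this radosgw serving a plain beast frontend at 192.168.122.102:8082, while the ssl_certificate/ssl_private_key values a few lines earlier are config:// URLs, i.e. any cert material would live in the cluster config store rather than on disk. A sketch of setting and reading back a frontend line for this daemon; note this is an assumption-laden direct edit, since under cephadm the rgw service spec normally owns this option:

    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    # Point the daemon's beast frontend at the endpoint it registered.
    ceph("config", "set", "client.rgw.rgw.compute-2.klwwrz",
         "rgw_frontends", "beast endpoint=192.168.122.102:8082")
    print(ceph("config", "get", "client.rgw.rgw.compute-2.klwwrz",
               "rgw_frontends"))
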
Oct 09 10:59:58 compute-2 ceph-mon[6044]: 2.11 scrub starts
Oct 09 10:59:58 compute-2 ceph-mon[6044]: 2.11 scrub ok
Oct 09 10:59:58 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1576514846' entity='client.rgw.rgw.compute-0.cjdyiw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 10:59:58 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-1.vbxein' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 10:59:58 compute-2 ceph-mon[6044]: from='client.? ' entity='client.rgw.rgw.compute-2.klwwrz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 10:59:58 compute-2 ceph-mon[6044]: osdmap e42: 3 total, 3 up, 3 in
Oct 09 10:59:58 compute-2 ceph-mon[6044]: 4.0 scrub starts
Oct 09 10:59:58 compute-2 ceph-mon[6044]: 4.0 scrub ok
Oct 09 10:59:58 compute-2 ceph-mon[6044]: 5.b scrub starts
Oct 09 10:59:58 compute-2 ceph-mon[6044]: 5.b scrub ok
Oct 09 10:59:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:58.967+0000 7facda4a4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 10:59:58 compute-2 ceph-mgr[6348]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 10:59:58 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'selftest'
Oct 09 10:59:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:59.045+0000 7facda4a4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'snap_schedule'
Oct 09 10:59:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:59.133+0000 7facda4a4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'stats'
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'status'
Oct 09 10:59:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:59.284+0000 7facda4a4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telegraf'
Oct 09 10:59:59 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.e deep-scrub starts
Oct 09 10:59:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:59.358+0000 7facda4a4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telemetry'
Oct 09 10:59:59 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.e deep-scrub ok
Oct 09 10:59:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:59.520+0000 7facda4a4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 10:59:59 compute-2 ceph-mon[6044]: 7.15 scrub starts
Oct 09 10:59:59 compute-2 ceph-mon[6044]: 7.15 scrub ok
Oct 09 10:59:59 compute-2 ceph-mon[6044]: 4.7 deep-scrub starts
Oct 09 10:59:59 compute-2 ceph-mon[6044]: 4.7 deep-scrub ok
Oct 09 10:59:59 compute-2 ceph-mon[6044]: 3.e deep-scrub starts
Oct 09 10:59:59 compute-2 ceph-mon[6044]: 3.e deep-scrub ok
Oct 09 10:59:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T10:59:59.751+0000 7facda4a4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 10:59:59 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'volumes'
Oct 09 11:00:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:00.036+0000 7facda4a4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'zabbix'
Oct 09 11:00:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:00.110+0000 7facda4a4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: mgr load Constructed class from module: dashboard
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: [dashboard INFO root] Starting engine...
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: ms_deliver_dispatch: unhandled message 0x55c6c1b0f860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 11:00:00 compute-2 ceph-mgr[6348]: [dashboard INFO root] Engine started...
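
The dashboard came up without TLS ("server: ssl=no host=192.168.122.102 port=8443" above). A sketch of the config knobs behind that line, issued through the CLI (assumes an admin keyring; the commented lines show the usual way to turn TLS back on):

    import subprocess

    def ceph(*args: str) -> None:
        subprocess.run(["ceph", *args], check=True)

    ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
    ceph("config", "set", "mgr", "mgr/dashboard/server_port", "8443")
    # Re-enabling TLS later: generate a self-signed cert and flip ssl on.
    # ceph("dashboard", "create-self-signed-cert")
    # ceph("config", "set", "mgr", "mgr/dashboard/ssl", "true")
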
Oct 09 11:00:00 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 09 11:00:00 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 09 11:00:00 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct 09 11:00:00 compute-2 ceph-mon[6044]: 2.3 scrub starts
Oct 09 11:00:00 compute-2 ceph-mon[6044]: 2.3 scrub ok
Oct 09 11:00:00 compute-2 ceph-mon[6044]: overall HEALTH_OK
Oct 09 11:00:00 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv restarted
Oct 09 11:00:00 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv started
Oct 09 11:00:00 compute-2 ceph-mon[6044]: 6.f scrub starts
Oct 09 11:00:00 compute-2 ceph-mon[6044]: 6.f scrub ok
Oct 09 11:00:00 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm restarted
Oct 09 11:00:00 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm started
Oct 09 11:00:00 compute-2 ceph-mon[6044]: 5.8 scrub starts
Oct 09 11:00:00 compute-2 ceph-mon[6044]: 5.8 scrub ok
Oct 09 11:00:00 compute-2 ceph-mon[6044]: Active manager daemon compute-0.izrudc restarted
Oct 09 11:00:00 compute-2 ceph-mon[6044]: Activating manager daemon compute-0.izrudc
Oct 09 11:00:01 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:01 compute-2 systemd[1]: Starting system activity accounting tool...
Oct 09 11:00:01 compute-2 sshd-session[12215]: Accepted publickey for ceph-admin from 192.168.122.100 port 49042 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 11:00:01 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 11:00:01 compute-2 systemd[1]: Finished system activity accounting tool.
Oct 09 11:00:01 compute-2 systemd-logind[844]: New session 17 of user ceph-admin.
Oct 09 11:00:01 compute-2 systemd[1]: Started Session 17 of User ceph-admin.
Oct 09 11:00:01 compute-2 sshd-session[12215]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 11:00:01 compute-2 sudo[12220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:01 compute-2 sudo[12220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:01 compute-2 sudo[12220]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:01 compute-2 sudo[12245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 11:00:01 compute-2 sudo[12245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:01 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 09 11:00:01 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 09 11:00:02 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 09 11:00:02 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 09 11:00:02 compute-2 podman[12343]: 2025-10-09 11:00:02.950551013 +0000 UTC m=+1.229453671 container exec 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 09 11:00:03 compute-2 ceph-mon[6044]: 2.0 scrub starts
Oct 09 11:00:03 compute-2 ceph-mon[6044]: 2.0 scrub ok
Oct 09 11:00:03 compute-2 ceph-mon[6044]: osdmap e43: 3 total, 3 up, 3 in
Oct 09 11:00:03 compute-2 ceph-mon[6044]: mgrmap e14: compute-0.izrudc(active, starting, since 0.0679689s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: Manager daemon compute-0.izrudc is now available
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct 09 11:00:03 compute-2 ceph-mon[6044]: 4.b scrub starts
Oct 09 11:00:03 compute-2 ceph-mon[6044]: 4.b scrub ok
Oct 09 11:00:03 compute-2 ceph-mon[6044]: 3.11 scrub starts
Oct 09 11:00:03 compute-2 ceph-mon[6044]: 3.11 scrub ok
Oct 09 11:00:03 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 09 11:00:03 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 09 11:00:03 compute-2 podman[12343]: 2025-10-09 11:00:03.591507176 +0000 UTC m=+1.870409814 container exec_died 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
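
The two podman events (exec at 11:00:02, exec_died here) bracket a single `podman exec` into the mon container, which is how cephadm runs its per-daemon checks. The same exec by hand, with the container name copied from the event and an illustrative command:

    import subprocess

    # Container name taken verbatim from the podman event above.
    name = "ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2"
    out = subprocess.run(
        ["podman", "exec", name, "ceph", "--version"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(out.strip())
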
Oct 09 11:00:04 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 09 11:00:04 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 7.0 scrub starts
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 7.0 scrub ok
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 7.7 scrub starts
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 7.7 scrub ok
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 6.9 scrub starts
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 6.9 scrub ok
Oct 09 11:00:05 compute-2 ceph-mon[6044]: mgrmap e15: compute-0.izrudc(active, since 1.76721s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 5.12 scrub starts
Oct 09 11:00:05 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:02] ENGINE Bus STARTING
Oct 09 11:00:05 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:02] ENGINE Serving on https://192.168.122.100:7150
Oct 09 11:00:05 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:02] ENGINE Client ('192.168.122.100', 60264) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 11:00:05 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:02] ENGINE Serving on http://192.168.122.100:8765
Oct 09 11:00:05 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:02] ENGINE Bus STARTED
Oct 09 11:00:05 compute-2 ceph-mon[6044]: pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 7.1 scrub starts
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 7.1 scrub ok
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 6.b scrub starts
Oct 09 11:00:05 compute-2 ceph-mon[6044]: 6.b scrub ok
Oct 09 11:00:05 compute-2 sudo[12245]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:05 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct 09 11:00:05 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:06 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct 09 11:00:06 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 5.12 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 3.15 scrub starts
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 3.15 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 7.d scrub starts
Oct 09 11:00:06 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 7.d scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 4.16 scrub starts
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 4.16 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 5.13 scrub starts
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 5.13 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:00:06 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:06 compute-2 ceph-mon[6044]: mgrmap e16: compute-0.izrudc(active, since 4s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 6.14 deep-scrub starts
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 6.14 deep-scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 2.18 scrub starts
Oct 09 11:00:06 compute-2 ceph-mon[6044]: 2.18 scrub ok
Oct 09 11:00:06 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:07 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 09 11:00:07 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 09 11:00:08 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Oct 09 11:00:08 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Oct 09 11:00:08 compute-2 sudo[12447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:08 compute-2 sudo[12447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:08 compute-2 sudo[12447]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:08 compute-2 sudo[12472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 11:00:08 compute-2 sudo[12472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:08 compute-2 ceph-mon[6044]: from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 7.c scrub starts
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 7.c scrub ok
Oct 09 11:00:08 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 7.19 scrub starts
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 7.19 scrub ok
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 6.16 scrub starts
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 6.16 scrub ok
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 2.5 scrub starts
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 2.5 scrub ok
Oct 09 11:00:08 compute-2 ceph-mon[6044]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:00:08 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:08 compute-2 ceph-mon[6044]: mgrmap e17: compute-0.izrudc(active, since 6s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 6.11 scrub starts
Oct 09 11:00:08 compute-2 ceph-mon[6044]: 6.11 scrub ok
Oct 09 11:00:08 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:08 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:09 compute-2 sudo[12472]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:09 compute-2 sudo[12528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:09 compute-2 sudo[12528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:09 compute-2 sudo[12528]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:09 compute-2 sudo[12553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 11:00:09 compute-2 sudo[12553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:09 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Oct 09 11:00:09 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Oct 09 11:00:09 compute-2 sudo[12553]: pam_unix(sudo:session): session closed for user root
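
The sudo trail since 11:00:01 is the cephadm host-refresh loop: the mgr sshes in as ceph-admin, resolves python3 with `which`, then runs the copied cephadm script with `ls`, `gather-facts`, and `list-networks`. The same three calls run locally (script path and timeout copied from the log, the `--image` flag the orchestrator passes is omitted; all three subcommands emit JSON):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    def cephadm(*args: str) -> str:
        return subprocess.run(
            ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", *args],
            check=True, capture_output=True, text=True,
        ).stdout

    daemons = json.loads(cephadm("ls"))          # daemons deployed on this host
    facts = json.loads(cephadm("gather-facts"))  # cpu/mem/os inventory
    nets = json.loads(cephadm("list-networks"))  # subnet -> interface map
    print(len(daemons), facts.get("hostname"), list(nets))
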
Oct 09 11:00:09 compute-2 systemd[2841]: Starting Mark boot as successful...
Oct 09 11:00:09 compute-2 systemd[2841]: Finished Mark boot as successful.
Oct 09 11:00:10 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 09 11:00:10 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 7.1a scrub starts
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 7.1a scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 6.1 scrub starts
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 6.1 scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: from='client.14427 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 11:00:10 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 5.1f scrub starts
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 5.1f scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 6.10 scrub starts
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 6.10 scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 4.1 scrub starts
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 4.1 scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:10 compute-2 ceph-mon[6044]: pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
Oct 09 11:00:10 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 6.13 scrub starts
Oct 09 11:00:10 compute-2 ceph-mon[6044]: 6.13 scrub ok
Oct 09 11:00:10 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:10 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 11:00:11 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 09 11:00:11 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 09 11:00:11 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 5.11 scrub starts
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 5.11 scrub ok
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 6.1c scrub starts
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 6.1c scrub ok
Oct 09 11:00:12 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:12 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
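
cephadm's memory autotuner manages osd_memory_target through per-host config masks; the finished command above clears the override for compute-1, and the same is done for compute-0 and compute-2 below. A sketch of setting and clearing such a mask (the 4 GiB value is illustrative):

    import subprocess

    def ceph(*args: str) -> None:
        subprocess.run(["ceph", *args], check=True)

    # The osd/host:<name> mask scopes a setting to the OSDs on one host.
    ceph("config", "set", "osd/host:compute-1", "osd_memory_target", "4294967296")
    # ...and remove the override again, as the autotuner does here.
    ceph("config", "rm", "osd/host:compute-1", "osd_memory_target")
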
Oct 09 11:00:12 compute-2 ceph-mon[6044]: from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 3.16 deep-scrub starts
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 3.16 deep-scrub ok
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 6.1d deep-scrub starts
Oct 09 11:00:12 compute-2 ceph-mon[6044]: 6.1d deep-scrub ok
Oct 09 11:00:12 compute-2 ceph-mon[6044]: pgmap v7: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Oct 09 11:00:12 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Oct 09 11:00:12 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Oct 09 11:00:12 compute-2 sudo[12596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 11:00:12 compute-2 sudo[12596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:12 compute-2 sudo[12596]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 11:00:13 compute-2 sudo[12621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12621]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12646]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:13 compute-2 sudo[12671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12671]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12696]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct 09 11:00:13 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct 09 11:00:13 compute-2 sudo[12744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12744]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12769]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 11:00:13 compute-2 sudo[12794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12794]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:13 compute-2 sudo[12819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12819]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:13 compute-2 sudo[12844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12844]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12869]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:13 compute-2 sudo[12894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12894]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 4.2 scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 4.2 scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 5.10 scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 5.10 scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 5.19 deep-scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 5.19 deep-scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 4.3 scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 4.3 scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 5.15 scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 5.15 scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 3.1e scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 3.1e scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 7.1d deep-scrub starts
Oct 09 11:00:13 compute-2 ceph-mon[6044]: 7.1d deep-scrub ok
Oct 09 11:00:13 compute-2 ceph-mon[6044]: pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:13 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
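
The audit entries above show the active mgr clearing per-host osd_memory_target overrides from the centralized config store and then regenerating the two artifacts it is about to push out: a minimal ceph.conf and the client.admin keyring. The same mon commands can be issued interactively; a sketch assuming admin privileges:

    $ ceph config rm osd/host:compute-2 osd_memory_target   # drop the host-scoped override
    $ ceph config generate-minimal-conf                     # minimal conf (fsid, mon_host)
    $ ceph auth get client.admin                            # print the admin keyring
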
Oct 09 11:00:13 compute-2 sudo[12919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12919]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12967]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[12992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:13 compute-2 sudo[12992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[12992]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[13017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:13 compute-2 sudo[13017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[13017]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:13 compute-2 sudo[13042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 11:00:13 compute-2 sudo[13042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:13 compute-2 sudo[13042]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 11:00:14 compute-2 sudo[13067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13067]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13092]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:14 compute-2 sudo[13117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13117]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13142]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13190]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Oct 09 11:00:14 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Oct 09 11:00:14 compute-2 sudo[13215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13215]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:14 compute-2 sudo[13240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13240]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:14 compute-2 sudo[13265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13265]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:14 compute-2 sudo[13290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13290]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13315]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:14 compute-2 sudo[13340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13340]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13365]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13413]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:14 compute-2 sudo[13438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13438]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:14 compute-2 sudo[13463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:14 compute-2 sudo[13463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:14 compute-2 sudo[13463]: pam_unix(sudo:session): session closed for user root
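
The long sudo runs above are cephadm's file-distribution pattern: create the target directory, stage a ".new" file under a per-cluster scratch tree in /tmp, hand the tree to ceph-admin so the unprivileged ssh user can write the payload, then restore root ownership, set the final mode (644 for ceph.conf, 600 for the keyring), and mv the file into place. A condensed sketch of the same sequence, with the fsid copied from the paths above:

    fsid=e990987d-9393-5e96-99ae-9e3a3319f191
    stage=/tmp/cephadm-$fsid/etc/ceph
    sudo mkdir -p /etc/ceph "$stage"
    sudo touch "$stage/ceph.client.admin.keyring.new"
    sudo chown -R ceph-admin "/tmp/cephadm-$fsid"            # let the ssh user write the payload
    sudo chown 0:0 "$stage/ceph.client.admin.keyring.new"    # then hand it back to root
    sudo chmod 600 "$stage/ceph.client.admin.keyring.new"
    sudo mv "$stage/ceph.client.admin.keyring.new" /etc/ceph/ceph.client.admin.keyring

When the scratch and destination directories sit on the same filesystem, the closing mv is an atomic rename, so readers of /etc/ceph never observe a half-written file.
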
Oct 09 11:00:15 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 09 11:00:15 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 09 11:00:15 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 11:00:15 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 11:00:15 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 3.13 scrub starts
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 3.13 scrub ok
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 7.18 scrub starts
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 7.18 scrub ok
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 4.6 deep-scrub starts
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 4.6 deep-scrub ok
Oct 09 11:00:15 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:15 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:15 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:15 compute-2 ceph-mon[6044]: from='client.14439 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
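
"dashboard set-grafana-api-url" is a dashboard-module command routed to the active mgr (target mon-mgr); it tells the dashboard where to source its embedded Grafana panels. The equivalent CLI, reusing the URL from the entry above:

    $ ceph dashboard set-grafana-api-url http://192.168.122.100:3100
    $ ceph dashboard get-grafana-api-url   # verify the stored value
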
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 3.4 scrub starts
Oct 09 11:00:15 compute-2 ceph-mon[6044]: 3.4 scrub ok
Oct 09 11:00:16 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 09 11:00:16 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 09 11:00:16 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:16 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 3.10 scrub starts
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 3.10 scrub ok
Oct 09 11:00:16 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 6.1b scrub starts
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 6.1b scrub ok
Oct 09 11:00:16 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:16 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:16 compute-2 ceph-mon[6044]: pgmap v9: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Oct 09 11:00:16 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:16 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 5.16 deep-scrub starts
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 5.16 deep-scrub ok
Oct 09 11:00:16 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 7.1e scrub starts
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 7.1e scrub ok
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 7.5 scrub starts
Oct 09 11:00:16 compute-2 ceph-mon[6044]: 7.5 scrub ok
Oct 09 11:00:16 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:16 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/622963856' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 09 11:00:16 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:16 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
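
The _set_new_cache_sizes line is the peon mon autotuning its caches, splitting a budget of roughly 0.95 GiB between incremental-osdmap, full-osdmap, and key/value allocations. That budget derives from the mon's memory target, an ordinary config option; a sketch, assuming the defaults are still in effect:

    $ ceph config get mon mon_memory_target              # current cache budget for the mon
    $ ceph config set mon mon_memory_target 2147483648   # raise it (example value, in bytes)
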
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  1: '-n'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  2: 'mgr.compute-2.agiurv'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  3: '-f'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  4: '--setuser'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  5: 'ceph'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  6: '--setgroup'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  7: 'ceph'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: mgr respawn  exe_path /proc/self/exe
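
Disabling the dashboard (the "mgr module disable" dispatch above) changed the set of enabled mgr modules, so every ceph-mgr, active and standby alike, re-executes itself through /proc/self/exe with its original argv; that is the entire "mgr respawn" block. The module toggles that trigger this are plain CLI:

    $ ceph mgr module disable dashboard   # each mgr respawns to drop the module
    $ ceph mgr module enable dashboard    # and respawns again to pick it up
    $ ceph mgr module ls                  # list enabled and available modules
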
Oct 09 11:00:16 compute-2 sshd-session[12219]: Connection closed by 192.168.122.100 port 49042
Oct 09 11:00:16 compute-2 sshd-session[12215]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 11:00:16 compute-2 systemd[1]: session-17.scope: Deactivated successfully.
Oct 09 11:00:16 compute-2 systemd[1]: session-17.scope: Consumed 4.057s CPU time.
Oct 09 11:00:16 compute-2 systemd-logind[844]: Session 17 logged out. Waiting for processes to exit.
Oct 09 11:00:16 compute-2 systemd-logind[844]: Removed session 17.
Oct 09 11:00:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setuser ceph since I am not root
Oct 09 11:00:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setgroup ceph since I am not root
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 11:00:16 compute-2 ceph-mgr[6348]: pidfile_write: ignore empty --pid-file
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'alerts'
Oct 09 11:00:17 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:17.117+0000 7fbb9627f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'balancer'
Oct 09 11:00:17 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:17.199+0000 7fbb9627f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Module balancer has missing NOTIFY_TYPES member
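
The recurring "Module X has missing NOTIFY_TYPES member" lines are a load-time nag in this Ceph release: mgr modules are Python classes, and any bundled module that does not declare which notification types it consumes is flagged while the mgr scans it. The modules still load, as the subsequent "Loading python module" lines show, and most messages appear twice, apparently because the same daemon output reaches the journal both via the container unit's captured stderr and via ceph's direct journald logging. To check what actually ended up enabled:

    $ ceph mgr module ls --format json-pretty | head -n 20
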
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'cephadm'
Oct 09 11:00:17 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Oct 09 11:00:17 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Oct 09 11:00:17 compute-2 ceph-mon[6044]: 6.15 deep-scrub starts
Oct 09 11:00:17 compute-2 ceph-mon[6044]: 6.15 deep-scrub ok
Oct 09 11:00:17 compute-2 ceph-mon[6044]: 3.2 deep-scrub starts
Oct 09 11:00:17 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:17 compute-2 ceph-mon[6044]: 3.2 deep-scrub ok
Oct 09 11:00:17 compute-2 ceph-mon[6044]: 4.1c scrub starts
Oct 09 11:00:17 compute-2 ceph-mon[6044]: 4.1c scrub ok
Oct 09 11:00:17 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:17 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:17 compute-2 ceph-mon[6044]: from='mgr.14385 192.168.122.100:0/3914712709' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:17 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/622963856' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 09 11:00:17 compute-2 ceph-mon[6044]: mgrmap e18: compute-0.izrudc(active, since 16s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'crash'
Oct 09 11:00:17 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:17.995+0000 7fbb9627f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 11:00:17 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'dashboard'
Oct 09 11:00:18 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 09 11:00:18 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 3.14 scrub starts
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 3.14 scrub ok
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 5.5 scrub starts
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 5.5 scrub ok
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 2.1b deep-scrub starts
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 2.1b deep-scrub ok
Oct 09 11:00:18 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3692881939' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 5.9 scrub starts
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 5.9 scrub ok
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 7.1b scrub starts
Oct 09 11:00:18 compute-2 ceph-mon[6044]: 7.1b scrub ok
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'devicehealth'
Oct 09 11:00:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:18.630+0000 7fbb9627f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 11:00:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 11:00:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 11:00:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]:   from numpy import show_config as show_numpy_config
Oct 09 11:00:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:18.799+0000 7fbb9627f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
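
The scipy UserWarning belongs to diskprediction_local: ceph-mgr hosts each Python module in its own embedded sub-interpreter, which NumPy only partially supports, so the import itself warns. It is emitted while the module is merely being scanned, independent of whether the module is enabled; to see its actual status:

    $ ceph mgr module ls --format json-pretty | grep -A1 diskprediction
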
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'influx'
Oct 09 11:00:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:18.870+0000 7fbb9627f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'insights'
Oct 09 11:00:18 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'iostat'
Oct 09 11:00:19 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:19.005+0000 7fbb9627f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'k8sevents'
Oct 09 11:00:19 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct 09 11:00:19 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'localpool'
Oct 09 11:00:19 compute-2 ceph-mon[6044]: 7.1f scrub starts
Oct 09 11:00:19 compute-2 ceph-mon[6044]: 7.1f scrub ok
Oct 09 11:00:19 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3692881939' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 09 11:00:19 compute-2 ceph-mon[6044]: mgrmap e19: compute-0.izrudc(active, since 18s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:19 compute-2 ceph-mon[6044]: 3.f deep-scrub starts
Oct 09 11:00:19 compute-2 ceph-mon[6044]: 3.f deep-scrub ok
Oct 09 11:00:19 compute-2 ceph-mon[6044]: 7.6 scrub starts
Oct 09 11:00:19 compute-2 ceph-mon[6044]: 7.6 scrub ok
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mirroring'
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'nfs'
Oct 09 11:00:19 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:19.968+0000 7fbb9627f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 11:00:19 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'orchestrator'
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.182+0000 7fbb9627f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.254+0000 7fbb9627f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_support'
Oct 09 11:00:20 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct 09 11:00:20 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.322+0000 7fbb9627f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.399+0000 7fbb9627f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'progress'
Oct 09 11:00:20 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct 09 11:00:20 compute-2 ceph-mon[6044]: 4.1d scrub starts
Oct 09 11:00:20 compute-2 ceph-mon[6044]: 4.1d scrub ok
Oct 09 11:00:20 compute-2 ceph-mon[6044]: 6.a deep-scrub starts
Oct 09 11:00:20 compute-2 ceph-mon[6044]: 6.a deep-scrub ok
Oct 09 11:00:20 compute-2 ceph-mon[6044]: 5.3 scrub starts
Oct 09 11:00:20 compute-2 ceph-mon[6044]: 5.3 scrub ok
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.473+0000 7fbb9627f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'prometheus'
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.823+0000 7fbb9627f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rbd_support'
Oct 09 11:00:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:20.925+0000 7fbb9627f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 11:00:20 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'restful'
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rgw'
Oct 09 11:00:21 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1e scrub starts
Oct 09 11:00:21 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.1e scrub ok
Oct 09 11:00:21 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:21.364+0000 7fbb9627f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rook'
Oct 09 11:00:21 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:21 compute-2 ceph-mon[6044]: 2.b scrub starts
Oct 09 11:00:21 compute-2 ceph-mon[6044]: 2.b scrub ok
Oct 09 11:00:21 compute-2 ceph-mon[6044]: osdmap e44: 3 total, 3 up, 3 in
Oct 09 11:00:21 compute-2 ceph-mon[6044]: 3.c scrub starts
Oct 09 11:00:21 compute-2 ceph-mon[6044]: 3.c scrub ok
Oct 09 11:00:21 compute-2 ceph-mon[6044]: 3.1 deep-scrub starts
Oct 09 11:00:21 compute-2 ceph-mon[6044]: 3.1 deep-scrub ok
Oct 09 11:00:21 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:21.900+0000 7fbb9627f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'selftest'
Oct 09 11:00:21 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:21.969+0000 7fbb9627f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 11:00:21 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'snap_schedule'
Oct 09 11:00:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:22.043+0000 7fbb9627f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'stats'
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'status'
Oct 09 11:00:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:22.188+0000 7fbb9627f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telegraf'
Oct 09 11:00:22 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct 09 11:00:22 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct 09 11:00:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:22.261+0000 7fbb9627f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telemetry'
Oct 09 11:00:22 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct 09 11:00:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:22.426+0000 7fbb9627f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 11:00:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:22.652+0000 7fbb9627f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'volumes'
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 6.1e scrub starts
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 6.1e scrub ok
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 3.d scrub starts
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 3.d scrub ok
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 3.6 scrub starts
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 3.6 scrub ok
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 2.c scrub starts
Oct 09 11:00:22 compute-2 ceph-mon[6044]: 2.c scrub ok
Oct 09 11:00:22 compute-2 ceph-mon[6044]: osdmap e45: 3 total, 3 up, 3 in
Oct 09 11:00:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:22.931+0000 7fbb9627f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:00:22 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'zabbix'
Oct 09 11:00:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:23.006+0000 7fbb9627f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: ms_deliver_dispatch: unhandled message 0x563b6ca2b860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 11:00:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setuser ceph since I am not root
Oct 09 11:00:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setgroup ceph since I am not root
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: pidfile_write: ignore empty --pid-file
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'alerts'
Oct 09 11:00:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:23.210+0000 7f3f5089b140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'balancer'
Oct 09 11:00:23 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 09 11:00:23 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 09 11:00:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:23.288+0000 7f3f5089b140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 11:00:23 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'cephadm'
Oct 09 11:00:23 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct 09 11:00:23 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct 09 11:00:23 compute-2 ceph-mon[6044]: 6.8 scrub starts
Oct 09 11:00:23 compute-2 ceph-mon[6044]: 6.8 scrub ok
Oct 09 11:00:23 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv restarted
Oct 09 11:00:23 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv started
Oct 09 11:00:23 compute-2 ceph-mon[6044]: 7.3 scrub starts
Oct 09 11:00:23 compute-2 ceph-mon[6044]: 7.3 scrub ok
Oct 09 11:00:23 compute-2 ceph-mon[6044]: 7.a scrub starts
Oct 09 11:00:23 compute-2 ceph-mon[6044]: 7.a scrub ok
Oct 09 11:00:23 compute-2 ceph-mon[6044]: Active manager daemon compute-0.izrudc restarted
Oct 09 11:00:23 compute-2 ceph-mon[6044]: Activating manager daemon compute-0.izrudc
Oct 09 11:00:23 compute-2 ceph-mon[6044]: osdmap e46: 3 total, 3 up, 3 in
Oct 09 11:00:23 compute-2 ceph-mon[6044]: osdmap e47: 3 total, 3 up, 3 in
Oct 09 11:00:23 compute-2 ceph-mon[6044]: mgrmap e20: compute-0.izrudc(active, starting, since 0.220637s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:23 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm restarted
Oct 09 11:00:23 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm started
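
The mgrmap churn above is routine mgr high availability: one active daemon (compute-0.izrudc) plus two standbys, with the mons logging "Activating manager daemon" and publishing a new mgrmap epoch each time the active instance comes back. The same state can be inspected, or a failover forced, from the CLI:

    $ ceph mgr stat                    # active mgr, epoch, standby availability
    $ ceph mgr fail compute-0.izrudc   # force failover to a standby (disruptive)
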
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'crash'
Oct 09 11:00:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:24.130+0000 7f3f5089b140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'dashboard'
Oct 09 11:00:24 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 09 11:00:24 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'devicehealth'
Oct 09 11:00:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:24.814+0000 7f3f5089b140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 11:00:24 compute-2 ceph-mon[6044]: 4.a scrub starts
Oct 09 11:00:24 compute-2 ceph-mon[6044]: 4.a scrub ok
Oct 09 11:00:24 compute-2 ceph-mon[6044]: 7.2 scrub starts
Oct 09 11:00:24 compute-2 ceph-mon[6044]: 7.2 scrub ok
Oct 09 11:00:24 compute-2 ceph-mon[6044]: 7.14 scrub starts
Oct 09 11:00:24 compute-2 ceph-mon[6044]: 7.14 scrub ok
Oct 09 11:00:24 compute-2 ceph-mon[6044]: mgrmap e21: compute-0.izrudc(active, starting, since 1.26552s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 11:00:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 11:00:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]:   from numpy import show_config as show_numpy_config
Oct 09 11:00:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:24.987+0000 7f3f5089b140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 11:00:24 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'influx'
Oct 09 11:00:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:25.055+0000 7f3f5089b140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'insights'
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'iostat'
Oct 09 11:00:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:25.191+0000 7f3f5089b140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'k8sevents'
Oct 09 11:00:25 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 09 11:00:25 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'localpool'
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mirroring'
Oct 09 11:00:25 compute-2 ceph-mon[6044]: 4.d deep-scrub starts
Oct 09 11:00:25 compute-2 ceph-mon[6044]: 4.d deep-scrub ok
Oct 09 11:00:25 compute-2 ceph-mon[6044]: 2.1 deep-scrub starts
Oct 09 11:00:25 compute-2 ceph-mon[6044]: 2.1 deep-scrub ok
Oct 09 11:00:25 compute-2 ceph-mon[6044]: 4.19 scrub starts
Oct 09 11:00:25 compute-2 ceph-mon[6044]: 4.19 scrub ok
Oct 09 11:00:25 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'nfs'
Oct 09 11:00:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:26.172+0000 7f3f5089b140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'orchestrator'
Oct 09 11:00:26 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.f scrub starts
Oct 09 11:00:26 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.f scrub ok
Oct 09 11:00:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:26.403+0000 7f3f5089b140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 11:00:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:26.481+0000 7f3f5089b140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_support'
Oct 09 11:00:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:26.546+0000 7f3f5089b140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 11:00:26 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:26.627+0000 7f3f5089b140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'progress'
Oct 09 11:00:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:26.703+0000 7f3f5089b140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 11:00:26 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'prometheus'
Oct 09 11:00:26 compute-2 ceph-mon[6044]: 5.7 scrub starts
Oct 09 11:00:26 compute-2 ceph-mon[6044]: 5.7 scrub ok
Oct 09 11:00:26 compute-2 ceph-mon[6044]: 7.e scrub starts
Oct 09 11:00:26 compute-2 ceph-mon[6044]: 7.e scrub ok
Oct 09 11:00:26 compute-2 ceph-mon[6044]: 2.f scrub starts
Oct 09 11:00:26 compute-2 ceph-mon[6044]: 2.f scrub ok
Oct 09 11:00:27 compute-2 systemd[1]: Stopping User Manager for UID 42477...
Oct 09 11:00:27 compute-2 systemd[2841]: Activating special unit Exit the Session...
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped target Main User Target.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped target Basic System.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped target Paths.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped target Sockets.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped target Timers.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 11:00:27 compute-2 systemd[2841]: Closed D-Bus User Message Bus Socket.
Oct 09 11:00:27 compute-2 systemd[2841]: Stopped Create User's Volatile Files and Directories.
Oct 09 11:00:27 compute-2 systemd[2841]: Removed slice User Application Slice.
Oct 09 11:00:27 compute-2 systemd[2841]: Reached target Shutdown.
Oct 09 11:00:27 compute-2 systemd[2841]: Finished Exit the Session.
Oct 09 11:00:27 compute-2 systemd[2841]: Reached target Exit the Session.
Oct 09 11:00:27 compute-2 systemd[1]: user@42477.service: Deactivated successfully.
Oct 09 11:00:27 compute-2 systemd[1]: Stopped User Manager for UID 42477.
Oct 09 11:00:27 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 09 11:00:27 compute-2 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 09 11:00:27 compute-2 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 09 11:00:27 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 09 11:00:27 compute-2 systemd[1]: Removed slice User Slice of UID 42477.
Oct 09 11:00:27 compute-2 systemd[1]: user-42477.slice: Consumed 1min 991ms CPU time.
Oct 09 11:00:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:27.081+0000 7f3f5089b140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rbd_support'
Oct 09 11:00:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:27.173+0000 7f3f5089b140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'restful'
Oct 09 11:00:27 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct 09 11:00:27 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rgw'
Oct 09 11:00:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:27.596+0000 7f3f5089b140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 11:00:27 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rook'
Oct 09 11:00:27 compute-2 ceph-mon[6044]: 4.5 scrub starts
Oct 09 11:00:27 compute-2 ceph-mon[6044]: 4.5 scrub ok
Oct 09 11:00:27 compute-2 ceph-mon[6044]: 7.4 scrub starts
Oct 09 11:00:27 compute-2 ceph-mon[6044]: 7.4 scrub ok
Oct 09 11:00:27 compute-2 ceph-mon[6044]: 2.10 scrub starts
Oct 09 11:00:27 compute-2 ceph-mon[6044]: 2.10 scrub ok
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.184+0000 7f3f5089b140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'selftest'
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.259+0000 7f3f5089b140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'snap_schedule'
Oct 09 11:00:28 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Oct 09 11:00:28 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.340+0000 7f3f5089b140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'stats'
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'status'
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.487+0000 7f3f5089b140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telegraf'
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.560+0000 7f3f5089b140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telemetry'
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.719+0000 7f3f5089b140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 11:00:28 compute-2 ceph-mon[6044]: 6.7 scrub starts
Oct 09 11:00:28 compute-2 ceph-mon[6044]: 6.7 scrub ok
Oct 09 11:00:28 compute-2 ceph-mon[6044]: 5.6 scrub starts
Oct 09 11:00:28 compute-2 ceph-mon[6044]: 5.6 scrub ok
Oct 09 11:00:28 compute-2 ceph-mon[6044]: 7.16 deep-scrub starts
Oct 09 11:00:28 compute-2 ceph-mon[6044]: 7.16 deep-scrub ok
Oct 09 11:00:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:28.944+0000 7f3f5089b140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 11:00:28 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'volumes'
Oct 09 11:00:29 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:29.239+0000 7f3f5089b140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'zabbix'
Oct 09 11:00:29 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Oct 09 11:00:29 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Oct 09 11:00:29 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:00:29.316+0000 7f3f5089b140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: mgr load Constructed class from module: dashboard
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: [dashboard INFO root] Starting engine...
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: ms_deliver_dispatch: unhandled message 0x55942cf4b860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 11:00:29 compute-2 ceph-mgr[6348]: [dashboard INFO root] Engine started...
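
With the dashboard re-enabled, this standby's mgr constructs its CherryPy server (ssl=no, port 8443 here), although only the active mgr actually serves the UI; a standby's dashboard redirects to the active one. The endpoint and TLS toggle are ordinary mgr state and config:

    $ ceph mgr services                             # URLs served by the active mgr
    $ ceph config set mgr mgr/dashboard/ssl false   # matches the ssl=no seen above
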
Oct 09 11:00:29 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct 09 11:00:29 compute-2 ceph-mon[6044]: 3.a deep-scrub starts
Oct 09 11:00:29 compute-2 ceph-mon[6044]: 3.a deep-scrub ok
Oct 09 11:00:29 compute-2 ceph-mon[6044]: 3.b scrub starts
Oct 09 11:00:29 compute-2 ceph-mon[6044]: 3.b scrub ok
Oct 09 11:00:29 compute-2 ceph-mon[6044]: 6.17 scrub starts
Oct 09 11:00:29 compute-2 ceph-mon[6044]: 6.17 scrub ok
Oct 09 11:00:29 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv restarted
Oct 09 11:00:29 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv started
Oct 09 11:00:29 compute-2 ceph-mon[6044]: Active manager daemon compute-0.izrudc restarted
Oct 09 11:00:29 compute-2 ceph-mon[6044]: Activating manager daemon compute-0.izrudc
Oct 09 11:00:29 compute-2 ceph-mon[6044]: osdmap e48: 3 total, 3 up, 3 in
Oct 09 11:00:29 compute-2 ceph-mon[6044]: mgrmap e22: compute-0.izrudc(active, starting, since 0.0750165s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 11:00:29 compute-2 ceph-mon[6044]: Manager daemon compute-0.izrudc is now available
Oct 09 11:00:29 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct 09 11:00:30 compute-2 sshd-session[13566]: Accepted publickey for ceph-admin from 192.168.122.100 port 56888 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 11:00:30 compute-2 systemd-logind[844]: New session 18 of user ceph-admin.
Oct 09 11:00:30 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Oct 09 11:00:30 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 09 11:00:30 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 09 11:00:30 compute-2 systemd[1]: Starting User Manager for UID 42477...
Oct 09 11:00:30 compute-2 systemd[13570]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 11:00:30 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 09 11:00:30 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 09 11:00:30 compute-2 systemd[13570]: Queued start job for default target Main User Target.
Oct 09 11:00:30 compute-2 systemd[13570]: Created slice User Application Slice.
Oct 09 11:00:30 compute-2 systemd[13570]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 11:00:30 compute-2 systemd[13570]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 11:00:30 compute-2 systemd[13570]: Reached target Paths.
Oct 09 11:00:30 compute-2 systemd[13570]: Reached target Timers.
Oct 09 11:00:30 compute-2 systemd[13570]: Starting D-Bus User Message Bus Socket...
Oct 09 11:00:30 compute-2 systemd[13570]: Starting Create User's Volatile Files and Directories...
Oct 09 11:00:30 compute-2 systemd[13570]: Listening on D-Bus User Message Bus Socket.
Oct 09 11:00:30 compute-2 systemd[13570]: Reached target Sockets.
Oct 09 11:00:30 compute-2 systemd[13570]: Finished Create User's Volatile Files and Directories.
Oct 09 11:00:30 compute-2 systemd[13570]: Reached target Basic System.
Oct 09 11:00:30 compute-2 systemd[13570]: Reached target Main User Target.
Oct 09 11:00:30 compute-2 systemd[13570]: Startup finished in 123ms.
Oct 09 11:00:30 compute-2 systemd[1]: Started User Manager for UID 42477.
Oct 09 11:00:30 compute-2 systemd[1]: Started Session 18 of User ceph-admin.
Oct 09 11:00:30 compute-2 sshd-session[13566]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 11:00:30 compute-2 sudo[13586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:30 compute-2 sudo[13586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:30 compute-2 sudo[13586]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:30 compute-2 sudo[13611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 11:00:30 compute-2 sudo[13611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:30 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e2 new map
Oct 09 11:00:30 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e2 print_map
                                          e2
                                          btime 2025-10-09T11:00:30.843595+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        2
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:30.843558+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        
                                          up        {}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                           
                                           
Oct 09 11:00:30 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
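Note: the print_map block just above is the mon dumping the MDSMap for the newly created 'cephfs' filesystem: epoch 2, max_mds 1, and an empty up set, which is why MDS_ALL_DOWN is raised a few lines below. A sketch of pulling the same map as JSON rather than the text dump, assuming the ceph CLI and admin keyring are present on this host (field names follow the fs dump JSON of recent releases):

    import json
    import subprocess

    # Same data as the print_map text above, but machine-readable.
    out = subprocess.run(["ceph", "fs", "dump", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for fs in json.loads(out)["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], "epoch", m["epoch"],
              "max_mds", m["max_mds"], "up", len(m.get("up", {})))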
Oct 09 11:00:30 compute-2 ceph-mon[6044]: 5.2 scrub starts
Oct 09 11:00:30 compute-2 ceph-mon[6044]: 5.2 scrub ok
Oct 09 11:00:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct 09 11:00:30 compute-2 ceph-mon[6044]: 7.8 scrub starts
Oct 09 11:00:30 compute-2 ceph-mon[6044]: 7.8 scrub ok
Oct 09 11:00:30 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm restarted
Oct 09 11:00:30 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm started
Oct 09 11:00:30 compute-2 ceph-mon[6044]: 4.15 scrub starts
Oct 09 11:00:30 compute-2 ceph-mon[6044]: 4.15 scrub ok
Oct 09 11:00:30 compute-2 ceph-mon[6044]: mgrmap e23: compute-0.izrudc(active, since 1.10082s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 09 11:00:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 09 11:00:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 09 11:00:30 compute-2 ceph-mon[6044]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 09 11:00:30 compute-2 ceph-mon[6044]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 09 11:00:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 09 11:00:30 compute-2 ceph-mon[6044]: osdmap e49: 3 total, 3 up, 3 in
Oct 09 11:00:30 compute-2 ceph-mon[6044]: fsmap cephfs:0
Oct 09 11:00:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
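Note: the from=/entity=/cmd= audit entries above are mon commands arriving as JSON, here the active mgr collecting mon/mgr/osd metadata. A sketch of submitting one of the same commands through the librados Python binding; the conffile and keyring paths are assumptions matching the files cephadm distributes later in this log:

    import json
    import rados

    cluster = rados.Rados(
        conffile="/etc/ceph/ceph.conf",
        conf=dict(keyring="/etc/ceph/ceph.client.admin.keyring"))
    cluster.connect()
    # Equivalent to one of the cmd=[{...}] dispatches logged above.
    cmd = json.dumps({"prefix": "mon metadata", "id": "compute-2"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, json.loads(outbuf) if outbuf else outs)
    cluster.shutdown()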
Oct 09 11:00:31 compute-2 podman[13706]: 2025-10-09 11:00:31.055947987 +0000 UTC m=+0.049486920 container exec 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 11:00:31 compute-2 podman[13706]: 2025-10-09 11:00:31.139345956 +0000 UTC m=+0.132884859 container exec_died 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 11:00:31 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct 09 11:00:31 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct 09 11:00:31 compute-2 sudo[13611]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:31 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:31 compute-2 sudo[13813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:31 compute-2 sudo[13813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:31 compute-2 sudo[13813]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:31 compute-2 sudo[13838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 11:00:31 compute-2 sudo[13838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:31 compute-2 ceph-mon[6044]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 11:00:31 compute-2 ceph-mon[6044]: 3.5 scrub starts
Oct 09 11:00:31 compute-2 ceph-mon[6044]: 3.5 scrub ok
Oct 09 11:00:31 compute-2 ceph-mon[6044]: 3.7 deep-scrub starts
Oct 09 11:00:31 compute-2 ceph-mon[6044]: 3.7 deep-scrub ok
Oct 09 11:00:31 compute-2 ceph-mon[6044]: 2.13 scrub starts
Oct 09 11:00:31 compute-2 ceph-mon[6044]: 2.13 scrub ok
Oct 09 11:00:31 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:31] ENGINE Bus STARTING
Oct 09 11:00:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 sudo[13838]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:32 compute-2 sudo[13894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:32 compute-2 sudo[13894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:32 compute-2 sudo[13894]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:32 compute-2 sudo[13919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 11:00:32 compute-2 sudo[13919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:32 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Oct 09 11:00:32 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Oct 09 11:00:32 compute-2 sudo[13919]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:32 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:31] ENGINE Serving on http://192.168.122.100:8765
Oct 09 11:00:32 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:31] ENGINE Serving on https://192.168.122.100:7150
Oct 09 11:00:32 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:31] ENGINE Bus STARTED
Oct 09 11:00:32 compute-2 ceph-mon[6044]: [09/Oct/2025:11:00:31] ENGINE Client ('192.168.122.100', 57166) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 11:00:32 compute-2 ceph-mon[6044]: pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='client.14505 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 11:00:32 compute-2 ceph-mon[6044]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 11:00:32 compute-2 ceph-mon[6044]: 3.3 scrub starts
Oct 09 11:00:32 compute-2 ceph-mon[6044]: 3.3 scrub ok
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 ceph-mon[6044]: 5.1e scrub starts
Oct 09 11:00:32 compute-2 ceph-mon[6044]: 5.1e scrub ok
Oct 09 11:00:32 compute-2 ceph-mon[6044]: 2.12 deep-scrub starts
Oct 09 11:00:32 compute-2 ceph-mon[6044]: 2.12 deep-scrub ok
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 11:00:32 compute-2 ceph-mon[6044]: mgrmap e24: compute-0.izrudc(active, since 3s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 11:00:33 compute-2 sudo[13962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 11:00:33 compute-2 sudo[13962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[13962]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[13987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 11:00:33 compute-2 sudo[13987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[13987]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14012]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:33 compute-2 sudo[14037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14037]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14062]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 09 11:00:33 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 09 11:00:33 compute-2 sudo[14110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14110]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14135]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 11:00:33 compute-2 sudo[14160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14160]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:33 compute-2 sudo[14185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14185]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:33 compute-2 sudo[14210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14210]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14235]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:33 compute-2 sudo[14260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14260]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct 09 11:00:33 compute-2 sudo[14285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14285]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14333]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:33 compute-2 sudo[14358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:00:33 compute-2 sudo[14358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:33 compute-2 sudo[14358]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 11:00:34 compute-2 ceph-mon[6044]: 4.e scrub starts
Oct 09 11:00:34 compute-2 ceph-mon[6044]: 4.e scrub ok
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 11:00:34 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 11:00:34 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 11:00:34 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 11:00:34 compute-2 ceph-mon[6044]: 5.a scrub starts
Oct 09 11:00:34 compute-2 ceph-mon[6044]: 5.a scrub ok
Oct 09 11:00:34 compute-2 ceph-mon[6044]: 4.8 scrub starts
Oct 09 11:00:34 compute-2 ceph-mon[6044]: 4.8 scrub ok
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 09 11:00:34 compute-2 ceph-mon[6044]: osdmap e50: 3 total, 3 up, 3 in
Oct 09 11:00:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 09 11:00:34 compute-2 sudo[14383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:34 compute-2 sudo[14383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14383]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 11:00:34 compute-2 sudo[14408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14408]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 11:00:34 compute-2 sudo[14433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14433]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14458]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:34 compute-2 sudo[14483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14483]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct 09 11:00:34 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct 09 11:00:34 compute-2 sudo[14508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14508]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14556]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14581]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:34 compute-2 sudo[14606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14606]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:34 compute-2 sudo[14631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14631]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:00:34 compute-2 sudo[14656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14656]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14681]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:34 compute-2 sudo[14706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14706]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14731]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct 09 11:00:34 compute-2 sudo[14779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14779]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:00:34 compute-2 sudo[14804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14804]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:34 compute-2 sudo[14829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:34 compute-2 sudo[14829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:34 compute-2 sudo[14829]: pam_unix(sudo:session): session closed for user root
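Note: the sudo sequence above is cephadm distributing ceph.conf and the admin keyring. Each file is staged as <name>.new with its final ownership and mode, then moved over the destination so readers never observe a partially written file (cephadm stages under /tmp and uses /bin/mv; the sketch below renames within the target directory, which keeps the replacement atomic). Paths and contents are illustrative:

    import os

    def install_file(path, data, mode, uid=0, gid=0):
        # Stage as <path>.new, set final owner/mode first, then rename
        # over the destination: the same pattern as the sudo calls above.
        tmp = path + ".new"
        with open(tmp, "wb") as f:
            f.write(data)
        os.chown(tmp, uid, gid)   # requires root, as in the log
        os.chmod(tmp, mode)
        os.rename(tmp, path)      # atomic within one filesystem

    install_file("/etc/ceph/ceph.conf", b"[global]\n", 0o644)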
Oct 09 11:00:35 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct 09 11:00:35 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:00:35 compute-2 ceph-mon[6044]: pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:00:35 compute-2 ceph-mon[6044]: 5.1 scrub starts
Oct 09 11:00:35 compute-2 ceph-mon[6044]: 5.1 scrub ok
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:35 compute-2 ceph-mon[6044]: 5.c scrub starts
Oct 09 11:00:35 compute-2 ceph-mon[6044]: 5.c scrub ok
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:00:35 compute-2 ceph-mon[6044]: 2.d scrub starts
Oct 09 11:00:35 compute-2 ceph-mon[6044]: 2.d scrub ok
Oct 09 11:00:35 compute-2 ceph-mon[6044]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 11:00:35 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 09 11:00:35 compute-2 ceph-mon[6044]: osdmap e51: 3 total, 3 up, 3 in
Oct 09 11:00:35 compute-2 sudo[14854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:35 compute-2 sudo[14854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:35 compute-2 sudo[14854]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:35 compute-2 sudo[14879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:35 compute-2 sudo[14879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:36 compute-2 systemd[1]: Reloading.
Oct 09 11:00:36 compute-2 systemd-sysv-generator[14974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:00:36 compute-2 systemd-rc-local-generator[14971]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:00:36 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Oct 09 11:00:36 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Oct 09 11:00:36 compute-2 systemd[1]: Reloading.
Oct 09 11:00:36 compute-2 systemd-rc-local-generator[15005]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:00:36 compute-2 systemd-sysv-generator[15009]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:00:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct 09 11:00:36 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:36 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:36 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 5.f scrub starts
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 5.f scrub ok
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 2.19 scrub starts
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 2.19 scrub ok
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 2.15 scrub starts
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 2.15 scrub ok
Oct 09 11:00:36 compute-2 ceph-mon[6044]: mgrmap e25: compute-0.izrudc(active, since 5s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:36 compute-2 ceph-mon[6044]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 09 11:00:36 compute-2 ceph-mon[6044]: pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 4.c scrub starts
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 4.c scrub ok
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 3.18 scrub starts
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 3.18 scrub ok
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 6.12 scrub starts
Oct 09 11:00:36 compute-2 ceph-mon[6044]: 6.12 scrub ok
Oct 09 11:00:36 compute-2 ceph-mon[6044]: osdmap e52: 3 total, 3 up, 3 in
Oct 09 11:00:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:36 compute-2 systemd[1]: Starting Ceph node-exporter.compute-2 for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:00:36 compute-2 bash[15066]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 09 11:00:37 compute-2 bash[15066]: Getting image source signatures
Oct 09 11:00:37 compute-2 bash[15066]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 09 11:00:37 compute-2 bash[15066]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 09 11:00:37 compute-2 bash[15066]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 09 11:00:37 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 09 11:00:37 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 09 11:00:37 compute-2 ceph-mon[6044]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 09 11:00:37 compute-2 ceph-mon[6044]: mgrmap e26: compute-0.izrudc(active, since 6s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:00:37 compute-2 ceph-mon[6044]: 4.1a scrub starts
Oct 09 11:00:37 compute-2 ceph-mon[6044]: 4.1a scrub ok
Oct 09 11:00:37 compute-2 ceph-mon[6044]: 5.17 deep-scrub starts
Oct 09 11:00:37 compute-2 ceph-mon[6044]: 5.17 deep-scrub ok
Oct 09 11:00:37 compute-2 ceph-mon[6044]: 7.11 scrub starts
Oct 09 11:00:37 compute-2 ceph-mon[6044]: 7.11 scrub ok
Oct 09 11:00:37 compute-2 bash[15066]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 09 11:00:37 compute-2 bash[15066]: Writing manifest to image destination
Oct 09 11:00:37 compute-2 podman[15066]: 2025-10-09 11:00:37.90870388 +0000 UTC m=+1.114198812 container create 8d9bdd55b066c153b48497121c742a823a2cce56d0fdf8762dce99797205c9bc (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 11:00:37 compute-2 podman[15066]: 2025-10-09 11:00:37.894058987 +0000 UTC m=+1.099553939 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 09 11:00:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/169451bd4d728c8fc7146f53231746918488f12302105db93afa9e149e09d513/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:37 compute-2 podman[15066]: 2025-10-09 11:00:37.95544407 +0000 UTC m=+1.160939022 container init 8d9bdd55b066c153b48497121c742a823a2cce56d0fdf8762dce99797205c9bc (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 11:00:37 compute-2 podman[15066]: 2025-10-09 11:00:37.960257889 +0000 UTC m=+1.165752821 container start 8d9bdd55b066c153b48497121c742a823a2cce56d0fdf8762dce99797205c9bc (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 11:00:37 compute-2 bash[15066]: 8d9bdd55b066c153b48497121c742a823a2cce56d0fdf8762dce99797205c9bc
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.966Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.966Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.967Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.967Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=arp
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=bcache
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=bonding
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=cpu
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=dmi
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=edac
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=entropy
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=filefd
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=netclass
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=netdev
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=netstat
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=nfs
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=nvme
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=os
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=pressure
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=rapl
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=selinux
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=softnet
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=stat
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=textfile
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=time
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.968Z caller=node_exporter.go:117 level=info collector=uname
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.969Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.969Z caller=node_exporter.go:117 level=info collector=xfs
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.969Z caller=node_exporter.go:117 level=info collector=zfs
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.970Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 09 11:00:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2[15141]: ts=2025-10-09T11:00:37.970Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 09 11:00:37 compute-2 systemd[1]: Started Ceph node-exporter.compute-2 for e990987d-9393-5e96-99ae-9e3a3319f191.
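Note: node-exporter is now serving on port 9100 with TLS disabled, per the tls_config lines above. A sketch probing the metrics endpoint to confirm the daemon answers; host, port, and the sampled metric come from the log and from node-exporter's standard uname collector, purely as a verification helper:

    from urllib.request import urlopen

    # Plain HTTP is correct here: "TLS is disabled." is logged above.
    with urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        text = resp.read().decode()

    hits = [l for l in text.splitlines() if l.startswith("node_uname_info")]
    print(hits[0] if hits else "node_uname_info metric not found")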
Oct 09 11:00:38 compute-2 sudo[14879]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2984779474' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2984779474' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 09 11:00:38 compute-2 ceph-mon[6044]: pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 09 11:00:38 compute-2 ceph-mon[6044]: 4.1b scrub starts
Oct 09 11:00:38 compute-2 ceph-mon[6044]: 4.1b scrub ok
Oct 09 11:00:38 compute-2 ceph-mon[6044]: 2.e scrub starts
Oct 09 11:00:38 compute-2 ceph-mon[6044]: 2.e scrub ok
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 11:00:38 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:39 compute-2 ceph-mon[6044]: 4.18 scrub starts
Oct 09 11:00:39 compute-2 ceph-mon[6044]: 4.18 scrub ok
Oct 09 11:00:39 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/2454360937' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 11:00:39 compute-2 ceph-mon[6044]: 3.19 deep-scrub starts
Oct 09 11:00:39 compute-2 ceph-mon[6044]: 3.19 deep-scrub ok
Oct 09 11:00:40 compute-2 ceph-mon[6044]: pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 09 11:00:40 compute-2 ceph-mon[6044]: 3.1c scrub starts
Oct 09 11:00:40 compute-2 ceph-mon[6044]: 3.1c scrub ok
Oct 09 11:00:40 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:40 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3363350816' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 11:00:40 compute-2 ceph-mon[6044]: 5.14 scrub starts
Oct 09 11:00:40 compute-2 ceph-mon[6044]: 5.14 scrub ok
Oct 09 11:00:41 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:41 compute-2 ceph-mon[6044]: 5.18 scrub starts
Oct 09 11:00:41 compute-2 ceph-mon[6044]: 5.18 scrub ok
Oct 09 11:00:41 compute-2 ceph-mon[6044]: 3.17 scrub starts
Oct 09 11:00:41 compute-2 ceph-mon[6044]: 3.17 scrub ok
Oct 09 11:00:41 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3409242971' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 09 11:00:42 compute-2 sudo[15150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:42 compute-2 sudo[15150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:42 compute-2 sudo[15150]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:42 compute-2 sudo[15175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:42 compute-2 sudo[15175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:42 compute-2 ceph-mon[6044]: pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 09 11:00:42 compute-2 ceph-mon[6044]: 6.5 scrub starts
Oct 09 11:00:42 compute-2 ceph-mon[6044]: 6.5 scrub ok
Oct 09 11:00:42 compute-2 ceph-mon[6044]: 3.12 scrub starts
Oct 09 11:00:42 compute-2 ceph-mon[6044]: 3.12 scrub ok
Oct 09 11:00:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 11:00:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.brbiqj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 11:00:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.166011223 +0000 UTC m=+0.034095766 container create e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 11:00:43 compute-2 systemd[1]: Started libpod-conmon-e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a.scope.
Oct 09 11:00:43 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.2334479 +0000 UTC m=+0.101532463 container init e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.239543476 +0000 UTC m=+0.107628019 container start e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.242213063 +0000 UTC m=+0.110297606 container attach e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 11:00:43 compute-2 charming_cray[15257]: 167 167
Oct 09 11:00:43 compute-2 systemd[1]: libpod-e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a.scope: Deactivated successfully.
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.245479417 +0000 UTC m=+0.113563960 container died e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.150731372 +0000 UTC m=+0.018815935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:00:43 compute-2 systemd[1]: var-lib-containers-storage-overlay-dfdf897ca8980b1ac2f0efbd90e764b8513de749d3f4eeffa08be0a1df72c06d-merged.mount: Deactivated successfully.
Oct 09 11:00:43 compute-2 podman[15241]: 2025-10-09 11:00:43.277188393 +0000 UTC m=+0.145272936 container remove e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 11:00:43 compute-2 systemd[1]: libpod-conmon-e2ba42b2ce0a2c6e3de6057169114d3cb5b325cec7a5be77dc3a265380eebc0a.scope: Deactivated successfully.
Oct 09 11:00:43 compute-2 systemd[1]: Reloading.
Oct 09 11:00:43 compute-2 systemd-rc-local-generator[15299]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:00:43 compute-2 systemd-sysv-generator[15303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:00:43 compute-2 systemd[1]: Reloading.
Oct 09 11:00:43 compute-2 systemd-rc-local-generator[15340]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:00:43 compute-2 systemd-sysv-generator[15345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:00:43 compute-2 systemd[1]: Starting Ceph mds.cephfs.compute-2.brbiqj for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:00:43 compute-2 ceph-mon[6044]: Deploying daemon mds.cephfs.compute-2.brbiqj on compute-2
Oct 09 11:00:43 compute-2 ceph-mon[6044]: 6.2 scrub starts
Oct 09 11:00:43 compute-2 ceph-mon[6044]: 6.2 scrub ok
Oct 09 11:00:43 compute-2 ceph-mon[6044]: 7.f scrub starts
Oct 09 11:00:43 compute-2 ceph-mon[6044]: 7.f scrub ok
Oct 09 11:00:43 compute-2 ceph-mon[6044]: from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 11:00:44 compute-2 podman[15399]: 2025-10-09 11:00:44.02979202 +0000 UTC m=+0.037989708 container create 8783d8a79cb640d22ed209216c4956862ba6080912df263620a09d615ba6d0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-2-brbiqj, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 09 11:00:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550bdadbc1031f348cb4b213db685afb2beb80e072e0cf807a10f9426c7882b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550bdadbc1031f348cb4b213db685afb2beb80e072e0cf807a10f9426c7882b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550bdadbc1031f348cb4b213db685afb2beb80e072e0cf807a10f9426c7882b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/550bdadbc1031f348cb4b213db685afb2beb80e072e0cf807a10f9426c7882b4/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.brbiqj supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:44 compute-2 podman[15399]: 2025-10-09 11:00:44.08654327 +0000 UTC m=+0.094740958 container init 8783d8a79cb640d22ed209216c4956862ba6080912df263620a09d615ba6d0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-2-brbiqj, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 11:00:44 compute-2 podman[15399]: 2025-10-09 11:00:44.091338858 +0000 UTC m=+0.099536546 container start 8783d8a79cb640d22ed209216c4956862ba6080912df263620a09d615ba6d0d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-2-brbiqj, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 11:00:44 compute-2 bash[15399]: 8783d8a79cb640d22ed209216c4956862ba6080912df263620a09d615ba6d0d9
Oct 09 11:00:44 compute-2 podman[15399]: 2025-10-09 11:00:44.011874193 +0000 UTC m=+0.020071901 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:00:44 compute-2 systemd[1]: Started Ceph mds.cephfs.compute-2.brbiqj for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:00:44 compute-2 ceph-mds[15418]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 11:00:44 compute-2 ceph-mds[15418]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 09 11:00:44 compute-2 ceph-mds[15418]: main not setting numa affinity
Oct 09 11:00:44 compute-2 ceph-mds[15418]: pidfile_write: ignore empty --pid-file
Oct 09 11:00:44 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-2-brbiqj[15414]: starting mds.cephfs.compute-2.brbiqj at 
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.cephfs.compute-2.brbiqj Updating MDS map to version 2 from mon.1
Oct 09 11:00:44 compute-2 sudo[15175]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:44 compute-2 ceph-mon[6044]: pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Oct 09 11:00:44 compute-2 ceph-mon[6044]: 6.3 deep-scrub starts
Oct 09 11:00:44 compute-2 ceph-mon[6044]: 6.3 deep-scrub ok
Oct 09 11:00:44 compute-2 ceph-mon[6044]: 7.9 scrub starts
Oct 09 11:00:44 compute-2 ceph-mon[6044]: 7.9 scrub ok
Oct 09 11:00:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 11:00:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.aesial", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 11:00:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:44 compute-2 ceph-mon[6044]: Deploying daemon mds.cephfs.compute-0.aesial on compute-0
Oct 09 11:00:44 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e3 new map
Oct 09 11:00:44 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e3 print_map
                                          e3
                                          btime 2025-10-09T11:00:44:961012+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        2
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:30.843558+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        
                                          up        {}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-2.brbiqj{-1:24211} state up:standby seq 1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.cephfs.compute-2.brbiqj Updating MDS map to version 3 from mon.1
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.cephfs.compute-2.brbiqj Monitors have assigned me to become a standby
Oct 09 11:00:44 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e4 new map
Oct 09 11:00:44 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e4 print_map
                                          e4
                                          btime 2025-10-09T11:00:44:984626+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        4
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:44.984620+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=24211}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                          [mds.cephfs.compute-2.brbiqj{0:24211} state up:creating seq 1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.cephfs.compute-2.brbiqj Updating MDS map to version 4 from mon.1
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.4 handle_mds_map I am now mds.0.4
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x1
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x100
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x600
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x601
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x602
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x603
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x604
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x605
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x606
Oct 09 11:00:44 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x607
Oct 09 11:00:45 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x608
Oct 09 11:00:45 compute-2 ceph-mds[15418]: mds.0.cache creating system inode with ino:0x609
Oct 09 11:00:45 compute-2 ceph-mds[15418]: mds.0.4 creating_done
Oct 09 11:00:45 compute-2 ceph-mon[6044]: 6.d scrub starts
Oct 09 11:00:45 compute-2 ceph-mon[6044]: 6.d scrub ok
Oct 09 11:00:45 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] up:boot
Oct 09 11:00:45 compute-2 ceph-mon[6044]: daemon mds.cephfs.compute-2.brbiqj assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 09 11:00:45 compute-2 ceph-mon[6044]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 09 11:00:45 compute-2 ceph-mon[6044]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 09 11:00:45 compute-2 ceph-mon[6044]: Cluster is now healthy
Oct 09 11:00:45 compute-2 ceph-mon[6044]: fsmap cephfs:0 1 up:standby
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.brbiqj"}]: dispatch
Oct 09 11:00:45 compute-2 ceph-mon[6044]: fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:creating}
Oct 09 11:00:45 compute-2 ceph-mon[6044]: daemon mds.cephfs.compute-2.brbiqj is now active in filesystem cephfs as rank 0
Oct 09 11:00:45 compute-2 ceph-mon[6044]: 7.b scrub starts
Oct 09 11:00:45 compute-2 ceph-mon[6044]: 7.b scrub ok
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.yzkqil", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 11:00:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e5 new map
Oct 09 11:00:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e5 print_map
                                          e5
                                          btime 2025-10-09T11:00:45:996712+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:45.996709+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=24211}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 24211 members: 24211
                                          [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 11:00:46 compute-2 ceph-mds[15418]: mds.cephfs.compute-2.brbiqj Updating MDS map to version 5 from mon.1
Oct 09 11:00:46 compute-2 ceph-mds[15418]: mds.0.4 handle_mds_map I am now mds.0.4
Oct 09 11:00:46 compute-2 ceph-mds[15418]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 09 11:00:46 compute-2 ceph-mds[15418]: mds.0.4 recovery_done -- successful recovery!
Oct 09 11:00:46 compute-2 ceph-mds[15418]: mds.0.4 active_start
Oct 09 11:00:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e6 new map
Oct 09 11:00:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e6 print_map
                                          e6
                                          btime 2025-10-09T11:00:46:009915+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:45.996709+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=24211}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 24211 members: 24211
                                          [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 11:00:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:47 compute-2 ceph-mon[6044]: from='client.14559 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 11:00:47 compute-2 ceph-mon[6044]: Deploying daemon mds.cephfs.compute-1.yzkqil on compute-1
Oct 09 11:00:47 compute-2 ceph-mon[6044]: pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 09 11:00:47 compute-2 ceph-mon[6044]: 6.e deep-scrub starts
Oct 09 11:00:47 compute-2 ceph-mon[6044]: 6.e deep-scrub ok
Oct 09 11:00:47 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] up:active
Oct 09 11:00:47 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] up:boot
Oct 09 11:00:47 compute-2 ceph-mon[6044]: fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 1 up:standby
Oct 09 11:00:47 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.aesial"}]: dispatch
Oct 09 11:00:47 compute-2 ceph-mon[6044]: fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 1 up:standby
Oct 09 11:00:47 compute-2 ceph-mon[6044]: 7.10 scrub starts
Oct 09 11:00:47 compute-2 ceph-mon[6044]: 7.10 scrub ok
Oct 09 11:00:48 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e7 new map
Oct 09 11:00:48 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e7 print_map
                                          e7
                                          btime 2025-10-09T11:00:48:014896+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:45.996709+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=24211}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 24211 members: 24211
                                          [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 2 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.yzkqil{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 11:00:48 compute-2 ceph-mon[6044]: 6.19 scrub starts
Oct 09 11:00:48 compute-2 ceph-mon[6044]: 6.19 scrub ok
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='client.14571 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 11:00:48 compute-2 ceph-mon[6044]: 7.13 scrub starts
Oct 09 11:00:48 compute-2 ceph-mon[6044]: 7.13 scrub ok
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 11:00:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:49 compute-2 ceph-mon[6044]: Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz
Oct 09 11:00:49 compute-2 ceph-mon[6044]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 09 11:00:49 compute-2 ceph-mon[6044]: Rados config object exists: conf-nfs.cephfs
Oct 09 11:00:49 compute-2 ceph-mon[6044]: Creating key for client.nfs.cephfs.0.0.compute-1.cjtqwz-rgw
Oct 09 11:00:49 compute-2 ceph-mon[6044]: Bind address in nfs.cephfs.0.0.compute-1.cjtqwz's ganesha conf is defaulting to empty
Oct 09 11:00:49 compute-2 ceph-mon[6044]: Deploying daemon nfs.cephfs.0.0.compute-1.cjtqwz on compute-1
Oct 09 11:00:49 compute-2 ceph-mon[6044]: pgmap v16: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Oct 09 11:00:49 compute-2 ceph-mon[6044]: 6.1a scrub starts
Oct 09 11:00:49 compute-2 ceph-mon[6044]: 6.1a scrub ok
Oct 09 11:00:49 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] up:boot
Oct 09 11:00:49 compute-2 ceph-mon[6044]: fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 2 up:standby
Oct 09 11:00:49 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.yzkqil"}]: dispatch
Oct 09 11:00:49 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e8 new map
Oct 09 11:00:49 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e8 print_map
                                          e8
                                          btime 2025-10-09T11:00:49:727133+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        8
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:49.018608+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=24211}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 24211 members: 24211
                                          [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.yzkqil{-1:24176} state up:standby seq 1 addr [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 11:00:49 compute-2 ceph-mds[15418]: mds.cephfs.compute-2.brbiqj Updating MDS map to version 8 from mon.1
Oct 09 11:00:50 compute-2 ceph-mds[15418]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 09 11:00:50 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mds-cephfs-compute-2-brbiqj[15414]: 2025-10-09T11:00:50.000+0000 7f5b83872640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='client.14592 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:50 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] up:active
Oct 09 11:00:50 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] up:standby
Oct 09 11:00:50 compute-2 ceph-mon[6044]: fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 2 up:standby
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:50 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1248766397' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 11:00:51 compute-2 ceph-mon[6044]: Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg
Oct 09 11:00:51 compute-2 ceph-mon[6044]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 09 11:00:51 compute-2 ceph-mon[6044]: pgmap v17: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Oct 09 11:00:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e9 new map
Oct 09 11:00:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).mds e9 print_map
                                          e9
                                          btime 2025-10-09T11:00:51:777283+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        8
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T11:00:30.843558+0000
                                          modified        2025-10-09T11:00:49.018608+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=24211}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 24211 members: 24211
                                          [mds.cephfs.compute-2.brbiqj{0:24211} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/374421123,v1:192.168.122.102:6805/374421123] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.aesial{-1:14565} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3689360911,v1:192.168.122.100:6807/3689360911] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.yzkqil{-1:24176} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] compat {c=[1],r=[1],i=[1fff]}]
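[editor's note] The print_map dump above is the same data that "ceph fs dump" returns; a short sketch of fetching it as JSON and pulling out the fields shown (epoch, max_mds, active and standby daemons). Field names follow the usual fs-dump JSON layout, which should match this release but is worth verifying against the live cluster.

    #!/usr/bin/env python3
    # Sketch: retrieve the fsmap printed above in machine-readable form.
    # Assumes "ceph" is on PATH with admin credentials.
    import json, subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        check=True, capture_output=True, text=True).stdout)

    for fs in dump.get("filesystems", []):
        mdsmap = fs["mdsmap"]
        print(mdsmap["fs_name"], "epoch", mdsmap["epoch"],
              "max_mds", mdsmap["max_mds"])
        for gid, info in mdsmap.get("info", {}).items():
            print(" ", info["name"], info["state"])

    print("standby daemons:",
          [d["name"] for d in dump.get("standbys", [])])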
Oct 09 11:00:52 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/1340127821' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 09 11:00:52 compute-2 ceph-mon[6044]: mds.? [v2:192.168.122.101:6804/1461969113,v1:192.168.122.101:6805/1461969113] up:standby
Oct 09 11:00:52 compute-2 ceph-mon[6044]: fsmap cephfs:1 {0=cephfs.compute-2.brbiqj=up:active} 2 up:standby
Oct 09 11:00:52 compute-2 ceph-mon[6044]: pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Oct 09 11:00:54 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 11:00:54 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3940324292' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 09 11:00:54 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 11:00:54 compute-2 ceph-mon[6044]: Rados config object exists: conf-nfs.cephfs
Oct 09 11:00:54 compute-2 ceph-mon[6044]: Creating key for client.nfs.cephfs.1.0.compute-2.mtmthg-rgw
Oct 09 11:00:54 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 11:00:54 compute-2 sudo[15451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:00:54 compute-2 sudo[15451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:54 compute-2 sudo[15451]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:54 compute-2 sudo[15476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:00:54 compute-2 sudo[15476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.665618028 +0000 UTC m=+0.066167362 container create f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 09 11:00:54 compute-2 systemd[1]: Started libpod-conmon-f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895.scope.
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.618199998 +0000 UTC m=+0.018749352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:00:54 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.745086523 +0000 UTC m=+0.145635877 container init f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.751273602 +0000 UTC m=+0.151822936 container start f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_proskuriakova, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.754428313 +0000 UTC m=+0.154977667 container attach f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_proskuriakova, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Oct 09 11:00:54 compute-2 infallible_proskuriakova[15557]: 167 167
Oct 09 11:00:54 compute-2 systemd[1]: libpod-f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895.scope: Deactivated successfully.
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.756901914 +0000 UTC m=+0.157451248 container died f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 09 11:00:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-ba3a5d1eba06307de7a1c60a691ff3d8af4e2ed1d34269c85339ed48857a78d7-merged.mount: Deactivated successfully.
Oct 09 11:00:54 compute-2 podman[15541]: 2025-10-09 11:00:54.79275964 +0000 UTC m=+0.193308974 container remove f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_proskuriakova, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 11:00:54 compute-2 systemd[1]: libpod-conmon-f689bfdf69332de3682618074aeaeeac8c75409ce0877dec0168795892ae3895.scope: Deactivated successfully.
Oct 09 11:00:54 compute-2 systemd[1]: Reloading.
Oct 09 11:00:54 compute-2 systemd-rc-local-generator[15599]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:00:54 compute-2 systemd-sysv-generator[15603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
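[editor's note] The sudo/podman sequence above is cephadm's deploy path: a short-lived probe container is created and removed (the "167 167" output is apparently the uid/gid probe; Ceph runs as uid/gid 167 in the container), then systemd is reloaded and a unit named ceph-<fsid>@<daemon-name>.service is generated and started. A sketch for checking the resulting unit on the host; the unit name below is assembled from the fsid and daemon name in the log.

    #!/usr/bin/env python3
    # Sketch: verify the systemd unit cephadm generates for a deployed
    # daemon. Run on the host (compute-2) as root.
    import subprocess

    fsid = "e990987d-9393-5e96-99ae-9e3a3319f191"
    daemon = "nfs.cephfs.1.0.compute-2.mtmthg"
    unit = f"ceph-{fsid}@{daemon}.service"

    subprocess.run(["systemctl", "status", "--no-pager", unit],
                   check=False)
    # cephadm's own inventory of daemons on this host (only works if the
    # cephadm package is installed on the host, not just the bundled copy
    # under /var/lib/ceph/<fsid>/).
    subprocess.run(["cephadm", "ls"], check=False)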
Oct 09 11:00:55 compute-2 ceph-mon[6044]: pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Oct 09 11:00:55 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.mtmthg-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 11:00:55 compute-2 ceph-mon[6044]: Bind address in nfs.cephfs.1.0.compute-2.mtmthg's ganesha conf is defaulting to empty
Oct 09 11:00:55 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:55 compute-2 ceph-mon[6044]: Deploying daemon nfs.cephfs.1.0.compute-2.mtmthg on compute-2
Oct 09 11:00:55 compute-2 ceph-mon[6044]: from='client.? 192.168.122.100:0/3893144808' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 09 11:00:55 compute-2 systemd[1]: Reloading.
Oct 09 11:00:55 compute-2 systemd-rc-local-generator[15640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:00:55 compute-2 systemd-sysv-generator[15645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:00:55 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.mtmthg for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:00:55 compute-2 podman[15697]: 2025-10-09 11:00:55.585199397 +0000 UTC m=+0.035466675 container create 3a0bd741e4541c187a1e61cce89985f778cdfc381a00a44b979d769c6e0718a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Oct 09 11:00:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2b54dbe89639ff1dedcc3543c05d3a699f39f10b9ef24c9328de38e5159e27/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2b54dbe89639ff1dedcc3543c05d3a699f39f10b9ef24c9328de38e5159e27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2b54dbe89639ff1dedcc3543c05d3a699f39f10b9ef24c9328de38e5159e27/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:55 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b2b54dbe89639ff1dedcc3543c05d3a699f39f10b9ef24c9328de38e5159e27/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.mtmthg-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 11:00:55 compute-2 podman[15697]: 2025-10-09 11:00:55.631660439 +0000 UTC m=+0.081927747 container init 3a0bd741e4541c187a1e61cce89985f778cdfc381a00a44b979d769c6e0718a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 09 11:00:55 compute-2 podman[15697]: 2025-10-09 11:00:55.639255799 +0000 UTC m=+0.089523077 container start 3a0bd741e4541c187a1e61cce89985f778cdfc381a00a44b979d769c6e0718a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 11:00:55 compute-2 bash[15697]: 3a0bd741e4541c187a1e61cce89985f778cdfc381a00a44b979d769c6e0718a5
Oct 09 11:00:55 compute-2 podman[15697]: 2025-10-09 11:00:55.569989018 +0000 UTC m=+0.020256316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:00:55 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.mtmthg for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 11:00:55 compute-2 sudo[15476]: pam_unix(sudo:session): session closed for user root
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000002:nfs.cephfs.1: -2
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 11:00:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:55 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
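[editor's note] Ganesha came up cleanly despite the noisy startup: the grace period was entered and lifted (no clients to reclaim), the rados_cluster recovery backend's "Failed to traverse recovery db: -2" just reflects an empty/new recovery database, the DBUS errors are expected with no /run/dbus socket mounted into the container, and "No export entries found" means no exports have been created yet. A sketch for cross-checking that state through the mgr nfs module, plus fetching the shared config object named earlier ("Rados config object exists: conf-nfs.cephfs"); commands assume admin credentials.

    #!/usr/bin/env python3
    # Sketch: confirm the NFS cluster state implied by the warnings above.
    import subprocess

    def run(*cmd):
        return subprocess.run(cmd, check=False, capture_output=True,
                              text=True).stdout

    print(run("ceph", "nfs", "cluster", "info", "cephfs"))  # cluster id "cephfs"
    print(run("ceph", "nfs", "export", "ls", "cephfs"))     # expected: empty list
    # Shared ganesha config object referenced from ganesha.conf:
    print(run("rados", "-p", ".nfs", "-N", "cephfs",
              "get", "conf-nfs.cephfs", "-"))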
Oct 09 11:00:56 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:56 compute-2 ceph-mon[6044]: Creating key for client.nfs.cephfs.2.0.compute-0.akqbal
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 11:00:56 compute-2 ceph-mon[6044]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:56 compute-2 ceph-mon[6044]: pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 11:00:56 compute-2 ceph-mon[6044]: Rados config object exists: conf-nfs.cephfs
Oct 09 11:00:56 compute-2 ceph-mon[6044]: Creating key for client.nfs.cephfs.2.0.compute-0.akqbal-rgw
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.akqbal-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 11:00:56 compute-2 ceph-mon[6044]: Bind address in nfs.cephfs.2.0.compute-0.akqbal's ganesha conf is defaulting to empty
Oct 09 11:00:56 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:00:56 compute-2 ceph-mon[6044]: Deploying daemon nfs.cephfs.2.0.compute-0.akqbal on compute-0
Oct 09 11:00:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:57 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 11:00:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:57 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 11:00:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:57 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 11:00:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:57 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 11:00:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:00:57 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 11:00:58 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:58 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:58 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:58 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:58 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:00:58 compute-2 ceph-mon[6044]: Deploying daemon haproxy.nfs.cephfs.compute-1.thyuoj on compute-1
Oct 09 11:00:58 compute-2 ceph-mon[6044]: pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 2.7 KiB/s wr, 8 op/s
Oct 09 11:01:01 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:01 compute-2 ceph-mon[6044]: pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Oct 09 11:01:01 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:01 compute-2 CROND[15767]: (root) CMD (run-parts /etc/cron.hourly)
Oct 09 11:01:01 compute-2 run-parts[15770]: (/etc/cron.hourly) starting 0anacron
Oct 09 11:01:01 compute-2 anacron[15778]: Anacron started on 2025-10-09
Oct 09 11:01:01 compute-2 anacron[15778]: Will run job `cron.daily' in 22 min.
Oct 09 11:01:01 compute-2 anacron[15778]: Will run job `cron.weekly' in 42 min.
Oct 09 11:01:01 compute-2 anacron[15778]: Will run job `cron.monthly' in 62 min.
Oct 09 11:01:01 compute-2 anacron[15778]: Jobs will be executed sequentially
Oct 09 11:01:01 compute-2 run-parts[15780]: (/etc/cron.hourly) finished 0anacron
Oct 09 11:01:01 compute-2 CROND[15766]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 09 11:01:02 compute-2 ceph-mon[6044]: pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.6 KiB/s wr, 9 op/s
Oct 09 11:01:04 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:04 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3708000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:04 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:04 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:04 compute-2 ceph-mon[6044]: pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 09 11:01:04 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:04 compute-2 ceph-mon[6044]: Deploying daemon haproxy.nfs.cephfs.compute-0.zhclxd on compute-0
Oct 09 11:01:06 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:06 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36fc001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:06 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:06 compute-2 ceph-mon[6044]: pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 09 11:01:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:08 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e4000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:08 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:08 compute-2 sudo[15785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:01:08 compute-2 sudo[15785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:08 compute-2 sudo[15785]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:08 compute-2 sudo[15810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:01:08 compute-2 sudo[15810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:08 compute-2 ceph-mon[6044]: pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 09 11:01:08 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:08 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:08 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:08 compute-2 ceph-mon[6044]: Deploying daemon haproxy.nfs.cephfs.compute-2.xqfbnl on compute-2
Oct 09 11:01:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:10 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36ec000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:10 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:10 compute-2 ceph-mon[6044]: pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.266537633 +0000 UTC m=+2.599236913 container create 029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db (image=quay.io/ceph/haproxy:2.3, name=infallible_hamilton)
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.24750534 +0000 UTC m=+2.580204650 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 11:01:11 compute-2 systemd[1]: Started libpod-conmon-029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db.scope.
Oct 09 11:01:11 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.352418285 +0000 UTC m=+2.685117585 container init 029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db (image=quay.io/ceph/haproxy:2.3, name=infallible_hamilton)
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.359980361 +0000 UTC m=+2.692679631 container start 029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db (image=quay.io/ceph/haproxy:2.3, name=infallible_hamilton)
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.363152073 +0000 UTC m=+2.695851343 container attach 029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db (image=quay.io/ceph/haproxy:2.3, name=infallible_hamilton)
Oct 09 11:01:11 compute-2 infallible_hamilton[15989]: 0 0
Oct 09 11:01:11 compute-2 systemd[1]: libpod-029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db.scope: Deactivated successfully.
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.366043287 +0000 UTC m=+2.698742557 container died 029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db (image=quay.io/ceph/haproxy:2.3, name=infallible_hamilton)
Oct 09 11:01:11 compute-2 systemd[1]: var-lib-containers-storage-overlay-283603eb0cdd01c8ff7b90dda5071a53976726f895e2bd66b69a09abdc54aac8-merged.mount: Deactivated successfully.
Oct 09 11:01:11 compute-2 podman[15875]: 2025-10-09 11:01:11.397881011 +0000 UTC m=+2.730580281 container remove 029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db (image=quay.io/ceph/haproxy:2.3, name=infallible_hamilton)
Oct 09 11:01:11 compute-2 systemd[1]: libpod-conmon-029dc947a5b67292fefec5d8eb9181216c04c528a643ce73c2db9708303da5db.scope: Deactivated successfully.
Oct 09 11:01:11 compute-2 systemd[1]: Reloading.
Oct 09 11:01:11 compute-2 systemd-rc-local-generator[16040]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:11 compute-2 systemd-sysv-generator[16043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:11 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:11 compute-2 systemd[1]: Reloading.
Oct 09 11:01:11 compute-2 systemd-rc-local-generator[16079]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:11 compute-2 systemd-sysv-generator[16082]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:12 compute-2 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-2.xqfbnl for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:01:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:12 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36fc002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:12 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:12 compute-2 podman[16136]: 2025-10-09 11:01:12.19355532 +0000 UTC m=+0.036181647 container create f5196a93bddafce6b5ffdebf0b48b62f48b665aea938bf6384b229ceeb5d071c (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl)
Oct 09 11:01:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71896cb8a76a9297438e15270766de23477d1b3d602a4534685b08d3e9eb70bd/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:12 compute-2 podman[16136]: 2025-10-09 11:01:12.244000366 +0000 UTC m=+0.086626713 container init f5196a93bddafce6b5ffdebf0b48b62f48b665aea938bf6384b229ceeb5d071c (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl)
Oct 09 11:01:12 compute-2 podman[16136]: 2025-10-09 11:01:12.248386999 +0000 UTC m=+0.091013326 container start f5196a93bddafce6b5ffdebf0b48b62f48b665aea938bf6384b229ceeb5d071c (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl)
Oct 09 11:01:12 compute-2 bash[16136]: f5196a93bddafce6b5ffdebf0b48b62f48b665aea938bf6384b229ceeb5d071c
Oct 09 11:01:12 compute-2 podman[16136]: 2025-10-09 11:01:12.178291705 +0000 UTC m=+0.020918052 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 11:01:12 compute-2 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-2.xqfbnl for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:01:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl[16151]: [NOTICE] 281/110112 (2) : New worker #1 (4) forked
Oct 09 11:01:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl[16151]: [WARNING] 281/110112 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 11:01:12 compute-2 sudo[15810]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:12 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36ec001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:13 compute-2 ceph-mon[6044]: pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 09 11:01:13 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:13 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:13 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:13 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:13 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 11:01:13 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 11:01:13 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 11:01:13 compute-2 ceph-mon[6044]: Deploying daemon keepalived.nfs.cephfs.compute-0.wkoquj on compute-0
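[editor's note] The haproxy.nfs.cephfs and keepalived.nfs.cephfs daemons being rolled out across all three hosts are the two halves of a cephadm ingress service in front of the nfs.cephfs backends; the subnet checks just above (192.168.122.2 in 192.168.122.0/24 on br-ex) show the orchestrator deciding where keepalived can place the virtual IP. A sketch of the kind of ingress spec that produces this layout: the virtual_ip, hosts, and backend come from the log, while frontend_port and monitor_port are assumed placeholders.

    #!/usr/bin/env python3
    # Sketch: apply an ingress spec of the shape implied by the deployment
    # above. Port numbers are assumptions, not taken from the log.
    import subprocess

    SPEC = """\
    service_type: ingress
    service_id: nfs.cephfs
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    spec:
      backend_service: nfs.cephfs
      virtual_ip: 192.168.122.2/24
      frontend_port: 2049    # assumed
      monitor_port: 9049     # assumed
    """

    subprocess.run(["ceph", "orch", "apply", "-i", "-"],
                   input=SPEC, text=True, check=True)

The "Server backend/nfs.cephfs.0 is DOWN" warning from the new haproxy a few lines back is the expected transient: its health checks start before all three ganesha backends are listening.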
Oct 09 11:01:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:14 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:14 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36fc002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:14 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:15 compute-2 ceph-mon[6044]: pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 09 11:01:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:16 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e40016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:16 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36d8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:16 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:16 compute-2 sudo[16165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:01:16 compute-2 sudo[16165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:16 compute-2 sudo[16165]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:16 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36ec001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:16 compute-2 sudo[16190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:01:16 compute-2 sudo[16190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:17 compute-2 ceph-mon[6044]: pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 09 11:01:17 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:17 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:17 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:18 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:18 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:18 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 11:01:18 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 11:01:18 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 11:01:18 compute-2 ceph-mon[6044]: Deploying daemon keepalived.nfs.cephfs.compute-2.dxpkeo on compute-2
Oct 09 11:01:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:18 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:19 compute-2 ceph-mon[6044]: pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.83771143 +0000 UTC m=+2.575326073 container create 09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641 (image=quay.io/ceph/keepalived:2.2.4, name=dazzling_ellis, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived, version=2.2.4, build-date=2023-02-22T09:23:20)
Oct 09 11:01:19 compute-2 systemd[1338]: Created slice User Background Tasks Slice.
Oct 09 11:01:19 compute-2 systemd[1338]: Starting Cleanup of User's Temporary Files and Directories...
Oct 09 11:01:19 compute-2 systemd[1]: Started libpod-conmon-09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641.scope.
Oct 09 11:01:19 compute-2 systemd[1338]: Finished Cleanup of User's Temporary Files and Directories.
Oct 09 11:01:19 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.823460171 +0000 UTC m=+2.561074844 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.887431357 +0000 UTC m=+2.625046000 container init 09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641 (image=quay.io/ceph/keepalived:2.2.4, name=dazzling_ellis, io.openshift.tags=Ceph keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.894635813 +0000 UTC m=+2.632250456 container start 09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641 (image=quay.io/ceph/keepalived:2.2.4, name=dazzling_ellis, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-type=git, release=1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., description=keepalived for Ceph)
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.897261861 +0000 UTC m=+2.634876504 container attach 09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641 (image=quay.io/ceph/keepalived:2.2.4, name=dazzling_ellis, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, version=2.2.4, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=Ceph keepalived, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Oct 09 11:01:19 compute-2 dazzling_ellis[16353]: 0 0
Oct 09 11:01:19 compute-2 systemd[1]: libpod-09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641.scope: Deactivated successfully.
Oct 09 11:01:19 compute-2 conmon[16353]: conmon 09f48b8ea021f8fe0540 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641.scope/container/memory.events
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.900112565 +0000 UTC m=+2.637727218 container died 09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641 (image=quay.io/ceph/keepalived:2.2.4, name=dazzling_ellis, com.redhat.component=keepalived-container, description=keepalived for Ceph, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, vendor=Red Hat, Inc., release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct 09 11:01:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-75032b08665e53696d06532049671c9cac071805629061fe1861317b3a0f949a-merged.mount: Deactivated successfully.
Oct 09 11:01:19 compute-2 podman[16256]: 2025-10-09 11:01:19.930786809 +0000 UTC m=+2.668401452 container remove 09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641 (image=quay.io/ceph/keepalived:2.2.4, name=dazzling_ellis, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, release=1793, description=keepalived for Ceph)
Oct 09 11:01:19 compute-2 systemd[1]: libpod-conmon-09f48b8ea021f8fe0540d6421d85eb47fd1be7c34972aa474953a655303e4641.scope: Deactivated successfully.
Oct 09 11:01:19 compute-2 systemd[1]: Reloading.
Oct 09 11:01:20 compute-2 systemd-rc-local-generator[16399]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:20 compute-2 systemd-sysv-generator[16404]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:20 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36ec001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:20 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:20 compute-2 systemd[1]: Reloading.
Oct 09 11:01:20 compute-2 systemd-sysv-generator[16445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:20 compute-2 systemd-rc-local-generator[16442]: /etc/rc.d/rc.local is not marked executable, skipping.
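The two generator messages repeat on every systemd "Reloading." and are unrelated to the Ceph deployment: rc.local is skipped because it lacks the execute bit, and the legacy SysV "network" initscript only gets an auto-generated compatibility unit. If rc.local is actually wanted on these hosts, the usual remedy is simply:
    chmod +x /etc/rc.d/rc.local    # lets systemd-rc-local-generator create rc-local.service
    systemctl daemon-reload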
Oct 09 11:01:20 compute-2 ceph-mon[6044]: pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 09 11:01:20 compute-2 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-2.dxpkeo for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:01:20 compute-2 podman[16498]: 2025-10-09 11:01:20.776567405 +0000 UTC m=+0.051859923 container create 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 09 11:01:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1916fcb5a5d6bbcec6a644e828dd33978ef7af44157dd3d477532e3ad2264feb/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:20 compute-2 podman[16498]: 2025-10-09 11:01:20.836938098 +0000 UTC m=+0.112230676 container init 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, description=keepalived for Ceph, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Oct 09 11:01:20 compute-2 podman[16498]: 2025-10-09 11:01:20.841940447 +0000 UTC m=+0.117232985 container start 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 11:01:20 compute-2 bash[16498]: 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2
Oct 09 11:01:20 compute-2 podman[16498]: 2025-10-09 11:01:20.761109925 +0000 UTC m=+0.036402453 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 11:01:20 compute-2 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-2.dxpkeo for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: Starting VRRP child process, pid=4
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: Startup complete
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: (VI_0) Entering BACKUP STATE (init)
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:20 2025: VRRP_Script(check_backend) succeeded
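Keepalived starts inside the container with '-n -l -f /etc/keepalived/keepalived.conf', comes up as BACKUP, and its check_backend script succeeds. A minimal sketch of what that generated config plausibly contains, assuming the VIP 192.168.122.2/24 on br-ex and the priority 90 seen later in this log; the virtual_router_id and the check script below are stand-ins, not the file cephadm actually writes:
    cat > /etc/keepalived/keepalived.conf <<'EOF'
    vrrp_script check_backend {
        script "/usr/bin/true"     # stand-in; cephadm wires in a real haproxy liveness check
        interval 2
    }
    vrrp_instance VI_0 {
        state BACKUP
        interface br-ex
        virtual_router_id 50       # assumption: cephadm derives this per ingress service
        priority 90
        virtual_ipaddress {
            192.168.122.2/24 dev br-ex
        }
        track_script {
            check_backend
        }
    }
    EOF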
Oct 09 11:01:20 compute-2 sudo[16190]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:20 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:21 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:22 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:22 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:22 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:22 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 11:01:22 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 11:01:22 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 11:01:22 compute-2 ceph-mon[6044]: Deploying daemon keepalived.nfs.cephfs.compute-1.ymbnot on compute-1
Oct 09 11:01:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:22 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0001b40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:22 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e4002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:22 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36d80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:23 compute-2 ceph-mon[6044]: pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:24 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36ec002f50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:24 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e0003720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:24 2025: (VI_0) Entering MASTER STATE
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:24 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:24 2025: (VI_0) Entering BACKUP STATE
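The MASTER-then-BACKUP flap is ordinary VRRP preemption: having heard no master yet, this node (priority 90) promoted itself, then stepped down as soon as the priority-100 advert arrived from 192.168.122.100. The election traffic can be watched directly, since VRRP is IP protocol 112:
    tcpdump -ni br-ex 'ip proto 112'    # adverts from whichever node currently holds MASTER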
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:24 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 11:01:24 compute-2 kernel: ganesha.nfsd[15781]: segfault at 50 ip 00007f37baacc32e sp 00007f377bffe210 error 4 in libntirpc.so.5.8[7f37baab1000+2c000] likely on CPU 6 (core 0, socket 6)
Oct 09 11:01:24 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
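Decoding the kernel line: on x86-64, "error 4" means a user-mode read of an unmapped address, and a faulting address of 0x50 is consistent with reading a field of a NULL struct pointer. The instruction pointer lands inside libntirpc.so.5.8, and systemd-coredump (below) resolves it to offset 0x2232e. With the matching ntirpc debuginfo installed, that offset maps back to a source line:
    addr2line -f -e /usr/lib64/libntirpc.so.5.8 0x2232e    # needs libntirpc debuginfo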
Oct 09 11:01:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[15712]: 09/10/2025 11:01:24 : epoch 68e795e7 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36e4003430 fd 37 proxy ignored for local
Oct 09 11:01:24 compute-2 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 09 11:01:24 compute-2 systemd[1]: Started Process Core Dump (PID 16523/UID 0).
Oct 09 11:01:25 compute-2 ceph-mon[6044]: pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 11:01:26 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:26 compute-2 systemd-coredump[16524]: Process 15716 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 52:
                                                   #0  0x00007f37baacc32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 11:01:26 compute-2 ceph-mon[6044]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 11:01:26 compute-2 systemd[1]: systemd-coredump@0-16523-0.service: Deactivated successfully.
Oct 09 11:01:26 compute-2 systemd[1]: systemd-coredump@0-16523-0.service: Consumed 1.148s CPU time.
Oct 09 11:01:26 compute-2 podman[16532]: 2025-10-09 11:01:26.761416217 +0000 UTC m=+0.021153859 container died 3a0bd741e4541c187a1e61cce89985f778cdfc381a00a44b979d769c6e0718a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 11:01:26 compute-2 systemd[1]: var-lib-containers-storage-overlay-6b2b54dbe89639ff1dedcc3543c05d3a699f39f10b9ef24c9328de38e5159e27-merged.mount: Deactivated successfully.
Oct 09 11:01:26 compute-2 podman[16532]: 2025-10-09 11:01:26.795678903 +0000 UTC m=+0.055416515 container remove 3a0bd741e4541c187a1e61cce89985f778cdfc381a00a44b979d769c6e0718a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 11:01:26 compute-2 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@nfs.cephfs.1.0.compute-2.mtmthg.service: Main process exited, code=exited, status=139/n/a
Oct 09 11:01:26 compute-2 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@nfs.cephfs.1.0.compute-2.mtmthg.service: Failed with result 'exit-code'.
Oct 09 11:01:26 compute-2 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@nfs.cephfs.1.0.compute-2.mtmthg.service: Consumed 1.380s CPU time.
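status=139 is 128+11, i.e. the container's main process died on SIGSEGV (the ganesha.nfsd crash above), and systemd-coredump captured PID 15716's core before podman tore the container down. Retrieving it later:
    coredumpctl list ganesha.nfsd    # locate the entry for PID 15716
    coredumpctl info 15716           # metadata plus the recorded stack trace
    coredumpctl debug 15716          # open the core in gdb (debuginfo strongly recommended)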
Oct 09 11:01:27 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:27 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:27 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:27 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:27 compute-2 ceph-mon[6044]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 09 11:01:28 compute-2 ceph-mon[6044]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 09 11:01:30 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct 09 11:01:30 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 11:01:30 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl[16151]: [WARNING] 281/110130 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
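The Layer4 "Connection refused" is the direct fallout of the crash: server nfs.cephfs.1 in the "backend" section is this node's ganesha, which is no longer listening, so haproxy marks it DOWN and keeps serving from the one remaining backend. Backend state can be queried over haproxy's runtime socket; the socket path inside cephadm's haproxy container is an assumption here:
    echo "show servers state backend" | socat stdio /var/lib/haproxy/haproxy.sock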
Oct 09 11:01:31 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct 09 11:01:31 compute-2 ceph-mon[6044]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: osdmap e53: 3 total, 3 up, 3 in
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: Regenerating cephadm self-signed grafana TLS certificates
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 09 11:01:31 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:31 compute-2 ceph-mon[6044]: Deploying daemon grafana.compute-0 on compute-0
Oct 09 11:01:31 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:32 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 09 11:01:32 compute-2 ceph-mon[6044]: osdmap e54: 3 total, 3 up, 3 in
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 11:01:32 compute-2 ceph-mon[6044]: osdmap e55: 3 total, 3 up, 3 in
Oct 09 11:01:32 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 11:01:33 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct 09 11:01:33 compute-2 ceph-mon[6044]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 09 11:01:33 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 09 11:01:33 compute-2 ceph-mon[6044]: osdmap e56: 3 total, 3 up, 3 in
Oct 09 11:01:33 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 11:01:34 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct 09 11:01:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:34 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:35 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct 09 11:01:35 compute-2 ceph-mon[6044]: pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 09 11:01:35 compute-2 ceph-mon[6044]: 9.14 scrub starts
Oct 09 11:01:35 compute-2 ceph-mon[6044]: 9.14 scrub ok
Oct 09 11:01:35 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 09 11:01:35 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 11:01:35 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 11:01:35 compute-2 ceph-mon[6044]: osdmap e57: 3 total, 3 up, 3 in
Oct 09 11:01:35 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:35 compute-2 ceph-mon[6044]: osdmap e58: 3 total, 3 up, 3 in
Oct 09 11:01:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct 09 11:01:36 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl[16151]: [WARNING] 281/110136 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 11:01:36 compute-2 ceph-mon[6044]: 8.14 scrub starts
Oct 09 11:01:36 compute-2 ceph-mon[6044]: 8.14 scrub ok
Oct 09 11:01:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:36 compute-2 ceph-mon[6044]: 8.16 scrub starts
Oct 09 11:01:36 compute-2 ceph-mon[6044]: 8.16 scrub ok
Oct 09 11:01:36 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
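The burst of "osd pool set ... pg_num 32" / "pg_num_actual 32" commands looks like the mgr bringing the freshly created RGW and NFS pools up to their target PG counts: pg_num sets the target, while pg_num_actual steps the live count toward it (hence the separate dispatch/finished pairs). The outcome is easy to confirm:
    ceph osd pool autoscale-status          # target vs. actual PG counts per pool
    ceph osd pool get .nfs pg_num
    ceph osd pool get .nfs pgp_num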
Oct 09 11:01:36 compute-2 ceph-mon[6044]: osdmap e59: 3 total, 3 up, 3 in
Oct 09 11:01:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:37 compute-2 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@nfs.cephfs.1.0.compute-2.mtmthg.service: Scheduled restart job, restart counter is at 1.
Oct 09 11:01:37 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.mtmthg for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:01:37 compute-2 systemd[1]: ceph-e990987d-9393-5e96-99ae-9e3a3319f191@nfs.cephfs.1.0.compute-2.mtmthg.service: Consumed 1.380s CPU time.
Oct 09 11:01:37 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.mtmthg for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:01:37 compute-2 podman[16627]: 2025-10-09 11:01:37.189805375 +0000 UTC m=+0.038248470 container create d4cc332abe71f9af4b878796d80d8f9b834be70c9e29ff5af33b17e3e230f3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 11:01:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a455968e223d9ecb1dc516420b23f3254c61e01a59fb80edbf71e647227be2b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a455968e223d9ecb1dc516420b23f3254c61e01a59fb80edbf71e647227be2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a455968e223d9ecb1dc516420b23f3254c61e01a59fb80edbf71e647227be2b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:37 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a455968e223d9ecb1dc516420b23f3254c61e01a59fb80edbf71e647227be2b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.mtmthg-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:37 compute-2 podman[16627]: 2025-10-09 11:01:37.24563389 +0000 UTC m=+0.094076985 container init d4cc332abe71f9af4b878796d80d8f9b834be70c9e29ff5af33b17e3e230f3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Oct 09 11:01:37 compute-2 podman[16627]: 2025-10-09 11:01:37.250989649 +0000 UTC m=+0.099432744 container start d4cc332abe71f9af4b878796d80d8f9b834be70c9e29ff5af33b17e3e230f3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 11:01:37 compute-2 bash[16627]: d4cc332abe71f9af4b878796d80d8f9b834be70c9e29ff5af33b17e3e230f3c5
Oct 09 11:01:37 compute-2 podman[16627]: 2025-10-09 11:01:37.172675692 +0000 UTC m=+0.021118807 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:01:37 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.mtmthg for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 11:01:37 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 11:01:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:37 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
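The restarted ganesha (new epoch 68e79611, Ganesha 5.9) immediately enters a 90-second grace period, during which NFSv4 clients may reclaim locks and state before new opens are admitted, the same grace the reaper announced just before the crash. The cluster side can be checked with the nfs mgr module; the cluster id "cephfs" is inferred from the daemon name nfs.cephfs.*:
    ceph nfs cluster info cephfs        # VIP and backend endpoints for the NFS cluster
    ceph orch ps --daemon-type nfs      # per-host ganesha daemons and their state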
Oct 09 11:01:37 compute-2 ceph-mon[6044]: pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:01:37 compute-2 ceph-mon[6044]: 10.15 scrub starts
Oct 09 11:01:37 compute-2 ceph-mon[6044]: 10.15 scrub ok
Oct 09 11:01:37 compute-2 ceph-mon[6044]: 9.15 deep-scrub starts
Oct 09 11:01:37 compute-2 ceph-mon[6044]: 9.15 deep-scrub ok
Oct 09 11:01:37 compute-2 ceph-mon[6044]: osdmap e60: 3 total, 3 up, 3 in
Oct 09 11:01:37 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:37 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:37 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:37 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:37 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:38 compute-2 ceph-mon[6044]: Deploying daemon haproxy.rgw.default.compute-0.kuntxb on compute-0
Oct 09 11:01:38 compute-2 ceph-mon[6044]: 10.17 deep-scrub starts
Oct 09 11:01:38 compute-2 ceph-mon[6044]: 10.17 deep-scrub ok
Oct 09 11:01:38 compute-2 ceph-mon[6044]: 8.15 scrub starts
Oct 09 11:01:38 compute-2 ceph-mon[6044]: 8.15 scrub ok
Oct 09 11:01:39 compute-2 sudo[16685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:01:39 compute-2 sudo[16685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:39 compute-2 sudo[16685]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:39 compute-2 sudo[16710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:01:39 compute-2 sudo[16710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:39 compute-2 ceph-mon[6044]: pgmap v49: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:01:39 compute-2 ceph-mon[6044]: 9.11 deep-scrub starts
Oct 09 11:01:39 compute-2 ceph-mon[6044]: 9.11 deep-scrub ok
Oct 09 11:01:39 compute-2 ceph-mon[6044]: 10.13 scrub starts
Oct 09 11:01:39 compute-2 ceph-mon[6044]: 10.13 scrub ok
Oct 09 11:01:39 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:39 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:39 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.746813244 +0000 UTC m=+0.044474922 container create 547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0 (image=quay.io/ceph/haproxy:2.3, name=pedantic_bartik)
Oct 09 11:01:39 compute-2 systemd[1]: Started libpod-conmon-547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0.scope.
Oct 09 11:01:39 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.816865017 +0000 UTC m=+0.114526725 container init 547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0 (image=quay.io/ceph/haproxy:2.3, name=pedantic_bartik)
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.727826693 +0000 UTC m=+0.025488421 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.822680487 +0000 UTC m=+0.120342175 container start 547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0 (image=quay.io/ceph/haproxy:2.3, name=pedantic_bartik)
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.825607513 +0000 UTC m=+0.123269211 container attach 547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0 (image=quay.io/ceph/haproxy:2.3, name=pedantic_bartik)
Oct 09 11:01:39 compute-2 pedantic_bartik[16794]: 0 0
Oct 09 11:01:39 compute-2 systemd[1]: libpod-547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0.scope: Deactivated successfully.
Oct 09 11:01:39 compute-2 conmon[16794]: conmon 547eb3db9cf626c30178 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0.scope/container/memory.events
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.828938799 +0000 UTC m=+0.126600487 container died 547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0 (image=quay.io/ceph/haproxy:2.3, name=pedantic_bartik)
Oct 09 11:01:39 compute-2 systemd[1]: var-lib-containers-storage-overlay-8aee8e21825539eb0adeb9654654af9c3499b0c4d100e0c163db3868408cd602-merged.mount: Deactivated successfully.
Oct 09 11:01:39 compute-2 podman[16778]: 2025-10-09 11:01:39.876172512 +0000 UTC m=+0.173834200 container remove 547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0 (image=quay.io/ceph/haproxy:2.3, name=pedantic_bartik)
Oct 09 11:01:39 compute-2 systemd[1]: libpod-conmon-547eb3db9cf626c30178723db25d2c78d7dd2041a5551d187a7ff93b298964c0.scope: Deactivated successfully.
Oct 09 11:01:39 compute-2 systemd[1]: Reloading.
Oct 09 11:01:40 compute-2 systemd-rc-local-generator[16843]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:40 compute-2 systemd-sysv-generator[16846]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:40 compute-2 systemd[1]: Reloading.
Oct 09 11:01:40 compute-2 systemd-sysv-generator[16884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:40 compute-2 systemd-rc-local-generator[16881]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:40 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:40 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.002000052s ======
Oct 09 11:01:40 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:40.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Oct 09 11:01:40 compute-2 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.zdhryc for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:01:40 compute-2 ceph-mon[6044]: Deploying daemon haproxy.rgw.default.compute-2.zdhryc on compute-2
Oct 09 11:01:40 compute-2 ceph-mon[6044]: pgmap v50: 353 pgs: 31 unknown, 32 peering, 290 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:01:40 compute-2 ceph-mon[6044]: 8.10 scrub starts
Oct 09 11:01:40 compute-2 ceph-mon[6044]: 8.10 scrub ok
Oct 09 11:01:40 compute-2 ceph-mon[6044]: 10.16 scrub starts
Oct 09 11:01:40 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:40 compute-2 podman[16939]: 2025-10-09 11:01:40.803090027 +0000 UTC m=+0.044587925 container create 5ae871c0e230cb45140dca1838abe0e13f3cf02b57130efa13b16bb888ff5e91 (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-2-zdhryc)
Oct 09 11:01:40 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79f7825dc6ed9b1ead0a753e994462dea7050eb4c54e1e83a58c11c263d85756/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:40 compute-2 podman[16939]: 2025-10-09 11:01:40.855789351 +0000 UTC m=+0.097287269 container init 5ae871c0e230cb45140dca1838abe0e13f3cf02b57130efa13b16bb888ff5e91 (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-2-zdhryc)
Oct 09 11:01:40 compute-2 podman[16939]: 2025-10-09 11:01:40.860064391 +0000 UTC m=+0.101562289 container start 5ae871c0e230cb45140dca1838abe0e13f3cf02b57130efa13b16bb888ff5e91 (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-2-zdhryc)
Oct 09 11:01:40 compute-2 bash[16939]: 5ae871c0e230cb45140dca1838abe0e13f3cf02b57130efa13b16bb888ff5e91
Oct 09 11:01:40 compute-2 podman[16939]: 2025-10-09 11:01:40.784441154 +0000 UTC m=+0.025939072 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 11:01:40 compute-2 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.zdhryc for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:01:40 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-rgw-default-compute-2-zdhryc[16954]: [NOTICE] 281/110140 (2) : New worker #1 (4) forked
Oct 09 11:01:40 compute-2 sudo[16710]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:41 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:42 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:42 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 09 11:01:42 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:42.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 10.16 scrub ok
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 9.16 scrub starts
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 11:01:42 compute-2 ceph-mon[6044]: Deploying daemon keepalived.rgw.default.compute-0.hpolom on compute-0
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 9.16 scrub ok
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 10.1 scrub starts
Oct 09 11:01:42 compute-2 ceph-mon[6044]: 10.1 scrub ok
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 09 11:01:42 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
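With the pg_num increases finished, the mgr now raises pgp_num_actual so data placement actually follows the new PG counts; the "val": "2" for default.rgw.log suggests pgp_num is stepped up gradually rather than jumped straight to 32. Progress is visible per pool:
    ceph osd pool get default.rgw.log pgp_num
    ceph status    # watch the peering counter drain as placement catches up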
Oct 09 11:01:42 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.15( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.16( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.17( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.16( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.13( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.11( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.2( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.3( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.f( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.a( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.a( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.9( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.e( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.d( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.c( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.8( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.b( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.3( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.6( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.5( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.13( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.4( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.1( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.7( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[11.19( empty local-lis/les=0/0 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.f( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.1f( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.11( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[8.1c( empty local-lis/les=0/0 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.9( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.3( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.4( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.3( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.1e( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.1a( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.18( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.1e( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.10( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.11( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[10.12( empty local-lis/les=0/0 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.1d( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.17( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 61 pg[12.2( empty local-lis/les=0/0 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:42 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:42 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:42 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:42.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.503789) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702504001, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6782, "num_deletes": 259, "total_data_size": 19027638, "memory_usage": 20028816, "flush_reason": "Manual Compaction"}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702594347, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12078713, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 6787, "table_properties": {"data_size": 12052912, "index_size": 16279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 82467, "raw_average_key_size": 24, "raw_value_size": 11988056, "raw_average_value_size": 3560, "num_data_blocks": 715, "num_entries": 3367, "num_filter_entries": 3367, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007551, "oldest_key_time": 1760007551, "file_creation_time": 1760007702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e92a8cb1-44df-49f9-9e99-9e69cedae100", "db_session_id": "6DJJ1ETB7PHV9MCMB6A8", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 90596 microseconds, and 46383 cpu microseconds.
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.594411) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12078713 bytes OK
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.594438) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.596291) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.596342) EVENT_LOG_v1 {"time_micros": 1760007702596334, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.596367) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 18991233, prev total WAL file size 18991233, number of live WAL files 2.
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.600286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702600420, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12080361, "oldest_snapshot_seqno": -1}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3111 keys, 12074818 bytes, temperature: kUnknown
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702681748, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12074818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12049681, "index_size": 16260, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 78910, "raw_average_key_size": 25, "raw_value_size": 11988114, "raw_average_value_size": 3853, "num_data_blocks": 714, "num_entries": 3111, "num_filter_entries": 3111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007550, "oldest_key_time": 0, "file_creation_time": 1760007702, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e92a8cb1-44df-49f9-9e99-9e69cedae100", "db_session_id": "6DJJ1ETB7PHV9MCMB6A8", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.682132) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12074818 bytes
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.684011) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.3 rd, 148.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.5, 0.0 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3372, records dropped: 261 output_compression: NoCompression
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.684041) EVENT_LOG_v1 {"time_micros": 1760007702684024, "job": 4, "event": "compaction_finished", "compaction_time_micros": 81460, "compaction_time_cpu_micros": 26111, "output_level": 6, "num_output_files": 1, "total_output_size": 12074818, "num_input_records": 3372, "num_output_records": 3111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702686721, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007702686780, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 09 11:01:42 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:01:42.600076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:01:43 compute-2 ceph-mon[6044]: pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:01:43 compute-2 ceph-mon[6044]: 9.10 scrub starts
Oct 09 11:01:43 compute-2 ceph-mon[6044]: 9.10 scrub ok
Oct 09 11:01:43 compute-2 ceph-mon[6044]: 10.c scrub starts
Oct 09 11:01:43 compute-2 ceph-mon[6044]: 10.c scrub ok
Oct 09 11:01:43 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 11:01:43 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 11:01:43 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 11:01:43 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 09 11:01:43 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 11:01:43 compute-2 ceph-mon[6044]: osdmap e61: 3 total, 3 up, 3 in
Oct 09 11:01:43 compute-2 ceph-mon[6044]: 12.15 deep-scrub starts
Oct 09 11:01:43 compute-2 ceph-mon[6044]: 12.15 deep-scrub ok
Oct 09 11:01:43 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.17( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.17( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.16( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.9( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.e( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.f( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.13( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.1( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.7( v 56'58 lc 52'32 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.10( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.11( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.11( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.a( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.12( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.13( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=61/62 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=61/62 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.4( v 56'58 (0'0,56'58] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.3( v 56'58 (0'0,56'58] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.3( v 60'57 lc 60'56 (0'0,60'57] local-lis/les=61/62 n=1 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=60'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=61/62 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.2( v 56'58 (0'0,56'58] local-lis/les=61/62 n=1 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.1e( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[11.19( empty local-lis/les=61/62 n=0 ec=57/40 lis/c=57/57 les/c/f=58/58/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.1d( v 56'58 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.4( v 39'48 (0'0,39'48] local-lis/les=61/62 n=1 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[10.1e( v 39'48 (0'0,39'48] local-lis/les=61/62 n=0 ec=57/38 lis/c=57/57 les/c/f=59/59/0 sis=61) [2] r=0 lpr=61 pi=[57,61)/1 crt=39'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.1a( v 60'65 lc 60'64 (0'0,60'65] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=60'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=61/62 n=0 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=61/62 n=1 ec=55/34 lis/c=55/55 les/c/f=56/56/0 sis=61) [2] r=0 lpr=61 pi=[55,61)/1 crt=35'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 62 pg[12.18( v 56'58 lc 52'18 (0'0,56'58] local-lis/les=61/62 n=0 ec=59/50 lis/c=59/59 les/c/f=60/60/0 sis=61) [2] r=0 lpr=61 pi=[59,61)/1 crt=56'58 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:43 compute-2 sudo[16971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:01:43 compute-2 sudo[16971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:43 compute-2 sudo[16971]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:43 compute-2 sudo[16996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:01:43 compute-2 sudo[16996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.061011582 +0000 UTC m=+0.051279287 container create 77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_merkle, vendor=Red Hat, Inc., io.buildah.version=1.28.2, description=keepalived for Ceph, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, release=1793, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9)
Oct 09 11:01:44 compute-2 systemd[1]: Started libpod-conmon-77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e.scope.
Oct 09 11:01:44 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.037383561 +0000 UTC m=+0.027651286 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.148139867 +0000 UTC m=+0.138407592 container init 77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_merkle, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, release=1793, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vcs-type=git, vendor=Red Hat, Inc.)
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.15596581 +0000 UTC m=+0.146233515 container start 77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_merkle, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.160330552 +0000 UTC m=+0.150598277 container attach 77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_merkle, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, version=2.2.4, release=1793, architecture=x86_64, distribution-scope=public, name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 09 11:01:44 compute-2 inspiring_merkle[17078]: 0 0
Oct 09 11:01:44 compute-2 systemd[1]: libpod-77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e.scope: Deactivated successfully.
Oct 09 11:01:44 compute-2 conmon[17078]: conmon 77f2eecf9cb923c1e04e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e.scope/container/memory.events
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.163547936 +0000 UTC m=+0.153815641 container died 77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_merkle, name=keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.28.2, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Oct 09 11:01:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-2909c78e26798817dd47c7167902b47d74eb0466316c15f7f611ae401499aae2-merged.mount: Deactivated successfully.
Oct 09 11:01:44 compute-2 podman[17062]: 2025-10-09 11:01:44.198537331 +0000 UTC m=+0.188805036 container remove 77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_merkle, io.buildah.version=1.28.2, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, distribution-scope=public, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Oct 09 11:01:44 compute-2 systemd[1]: libpod-conmon-77f2eecf9cb923c1e04edc47f3c3592d45bcb845955af3b75296d64c2fee698e.scope: Deactivated successfully.
Oct 09 11:01:44 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:44 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 09 11:01:44 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:44.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 09 11:01:44 compute-2 systemd[1]: Reloading.
Oct 09 11:01:44 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 09 11:01:44 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 09 11:01:44 compute-2 systemd-rc-local-generator[17120]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:44 compute-2 systemd-sysv-generator[17125]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:44 compute-2 ceph-mon[6044]: 11.15 scrub starts
Oct 09 11:01:44 compute-2 ceph-mon[6044]: 11.15 scrub ok
Oct 09 11:01:44 compute-2 ceph-mon[6044]: osdmap e62: 3 total, 3 up, 3 in
Oct 09 11:01:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:44 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 09 11:01:44 compute-2 ceph-mon[6044]: 10.0 scrub starts
Oct 09 11:01:44 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:44 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 09 11:01:44 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:44.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 09 11:01:44 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct 09 11:01:44 compute-2 systemd[1]: Reloading.
Oct 09 11:01:44 compute-2 systemd-rc-local-generator[17159]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 11:01:44 compute-2 systemd-sysv-generator[17165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 11:01:44 compute-2 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.txrqnp for e990987d-9393-5e96-99ae-9e3a3319f191...
Oct 09 11:01:45 compute-2 podman[17222]: 2025-10-09 11:01:45.022454862 +0000 UTC m=+0.041879955 container create da7fc65515056bb2e753f2e42d54805308614214eafde2b3c653d800f371c6aa (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, name=keepalived, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Oct 09 11:01:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/406066d91188d3c008e5ea4b0e1fc44c21992821237cee6cdb13b1bf6bbf6f57/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 11:01:45 compute-2 podman[17222]: 2025-10-09 11:01:45.07182654 +0000 UTC m=+0.091251653 container init da7fc65515056bb2e753f2e42d54805308614214eafde2b3c653d800f371c6aa (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=keepalived for Ceph, version=2.2.4, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 09 11:01:45 compute-2 podman[17222]: 2025-10-09 11:01:45.07684797 +0000 UTC m=+0.096273063 container start da7fc65515056bb2e753f2e42d54805308614214eafde2b3c653d800f371c6aa (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp, build-date=2023-02-22T09:23:20, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph)
Oct 09 11:01:45 compute-2 bash[17222]: da7fc65515056bb2e753f2e42d54805308614214eafde2b3c653d800f371c6aa
Oct 09 11:01:45 compute-2 podman[17222]: 2025-10-09 11:01:45.006312204 +0000 UTC m=+0.025737317 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 11:01:45 compute-2 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.txrqnp for e990987d-9393-5e96-99ae-9e3a3319f191.
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Starting VRRP child process, pid=4
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: Startup complete
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: (VI_0) Entering BACKUP STATE (init)
Oct 09 11:01:45 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:45 2025: VRRP_Script(check_backend) succeeded
Oct 09 11:01:45 compute-2 sudo[16996]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:45 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.17 deep-scrub starts
Oct 09 11:01:45 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.17 deep-scrub ok
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 11:01:45 compute-2 ceph-mon[6044]: Deploying daemon keepalived.rgw.default.compute-2.txrqnp on compute-2
Oct 09 11:01:45 compute-2 ceph-mon[6044]: pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 10.0 scrub ok
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 8.c scrub starts
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 8.c scrub ok
Oct 09 11:01:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 09 11:01:45 compute-2 ceph-mon[6044]: osdmap e63: 3 total, 3 up, 3 in
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 10.e deep-scrub starts
Oct 09 11:01:45 compute-2 ceph-mon[6044]: 10.e deep-scrub ok
Oct 09 11:01:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:45 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:46 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:46 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:46 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:46 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:46 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:46 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 09 11:01:46 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 09 11:01:46 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:46 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 11:01:46 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:46 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 11:01:46 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:46 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 09 11:01:46 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:46.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 09 11:01:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 64 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=64) [2] r=0 lpr=64 pi=[55,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:46 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:46 compute-2 ceph-mon[6044]: 12.a scrub starts
Oct 09 11:01:46 compute-2 ceph-mon[6044]: 12.a scrub ok
Oct 09 11:01:46 compute-2 ceph-mon[6044]: 11.17 deep-scrub starts
Oct 09 11:01:46 compute-2 ceph-mon[6044]: 11.17 deep-scrub ok
Oct 09 11:01:46 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:46 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:46 compute-2 ceph-mon[6044]: Deploying daemon prometheus.compute-0 on compute-0
Oct 09 11:01:46 compute-2 ceph-mon[6044]: pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:01:46 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 09 11:01:46 compute-2 ceph-mon[6044]: 10.a scrub starts
Oct 09 11:01:46 compute-2 ceph-mon[6044]: 10.a scrub ok
Oct 09 11:01:47 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:47 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:47 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:47 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:47 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Oct 09 11:01:47 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Oct 09 11:01:47 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.b( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.13( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.3( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:47 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[55,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:48 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:48 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:48 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:48 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 11.16 scrub starts
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 11.16 scrub ok
Oct 09 11:01:48 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 09 11:01:48 compute-2 ceph-mon[6044]: osdmap e64: 3 total, 3 up, 3 in
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 9.2 scrub starts
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 10.9 scrub starts
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 9.2 scrub ok
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 10.9 scrub ok
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 12.17 scrub starts
Oct 09 11:01:48 compute-2 ceph-mon[6044]: 12.17 scrub ok
Oct 09 11:01:48 compute-2 ceph-mon[6044]: osdmap e65: 3 total, 3 up, 3 in
Oct 09 11:01:48 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:48 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:48 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:48 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.9 scrub starts
Oct 09 11:01:48 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.9 scrub ok
Oct 09 11:01:48 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:48 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 09 11:01:48 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:48.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 09 11:01:49 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:49 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:49 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:49 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:49 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct 09 11:01:49 compute-2 ceph-mon[6044]: pgmap v59: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 0 keys/s, 3 objects/s recovering
Oct 09 11:01:49 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 9.e scrub starts
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 12.f scrub starts
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 9.e scrub ok
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 12.f scrub ok
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 12.9 scrub starts
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 12.9 scrub ok
Oct 09 11:01:49 compute-2 ceph-mon[6044]: 10.d scrub starts
Oct 09 11:01:49 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 09 11:01:49 compute-2 ceph-mon[6044]: osdmap e66: 3 total, 3 up, 3 in
Oct 09 11:01:50 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:50 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:50 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:50 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:50 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 67 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:50 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:50 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:50 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:50.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:50 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:50 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 09 11:01:50 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:50.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 9.9 scrub starts
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 9.9 scrub ok
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 10.d scrub ok
Oct 09 11:01:50 compute-2 ceph-mon[6044]: pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 keys/s, 2 objects/s recovering
Oct 09 11:01:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 11.0 scrub starts
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 11.0 scrub ok
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 12.5 scrub starts
Oct 09 11:01:50 compute-2 ceph-mon[6044]: 12.5 scrub ok
Oct 09 11:01:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 09 11:01:50 compute-2 ceph-mon[6044]: osdmap e67: 3 total, 3 up, 3 in
Oct 09 11:01:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:50 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:51 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:51 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:51 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:51 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.15( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.d( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.5( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.1d( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[55,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.f( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.17( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.b( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.3( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.7( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=6 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.1b( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.13( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 68 pg[9.1f( v 42'1020 (0'0,42'1020] local-lis/les=67/68 n=5 ec=55/36 lis/c=65/55 les/c/f=66/56/0 sis=67) [2] r=0 lpr=67 pi=[55,67)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:51 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 09 11:01:51 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 09 11:01:51 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  1: '-n'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  2: 'mgr.compute-2.agiurv'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  3: '-f'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  4: '--setuser'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  5: 'ceph'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  6: '--setgroup'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  7: 'ceph'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 09 11:01:51 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setuser ceph since I am not root
Oct 09 11:01:51 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: ignoring --setgroup ceph since I am not root
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: pidfile_write: ignore empty --pid-file
Oct 09 11:01:51 compute-2 sshd-session[13585]: Connection closed by 192.168.122.100 port 56888
Oct 09 11:01:51 compute-2 sshd-session[13566]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 11:01:51 compute-2 systemd[1]: session-18.scope: Deactivated successfully.
Oct 09 11:01:51 compute-2 systemd[1]: session-18.scope: Consumed 18.180s CPU time.
Oct 09 11:01:51 compute-2 systemd-logind[844]: Session 18 logged out. Waiting for processes to exit.
Oct 09 11:01:51 compute-2 systemd-logind[844]: Removed session 18.
Oct 09 11:01:51 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'alerts'
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:52.023+0000 7fa7a43ce140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'balancer'
Oct 09 11:01:52 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:52 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 09 11:01:52 compute-2 ceph-mon[6044]: 11.c scrub starts
Oct 09 11:01:52 compute-2 ceph-mon[6044]: 11.c scrub ok
Oct 09 11:01:52 compute-2 ceph-mon[6044]: 12.d scrub starts
Oct 09 11:01:52 compute-2 ceph-mon[6044]: 12.d scrub ok
Oct 09 11:01:52 compute-2 ceph-mon[6044]: osdmap e68: 3 total, 3 up, 3 in
Oct 09 11:01:52 compute-2 ceph-mon[6044]: 9.f scrub starts
Oct 09 11:01:52 compute-2 ceph-mon[6044]: 9.f scrub ok
Oct 09 11:01:52 compute-2 ceph-mon[6044]: from='mgr.14481 192.168.122.100:0/2158475446' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 09 11:01:52 compute-2 ceph-mon[6044]: mgrmap e27: compute-0.izrudc(active, since 82s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:52 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:52 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:52.107+0000 7fa7a43ce140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'cephadm'
Oct 09 11:01:52 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:52 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:52 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:52.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:52 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 11:01:52 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:52 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:52 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:52.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:52 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct 09 11:01:52 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 70 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=4 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:52 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 70 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=4 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:52 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 70 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:52 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 70 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:52 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 70 pg[9.5( v 69'1025 (0'0,69'1025] local-lis/les=0/0 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 luod=0'0 crt=59'1023 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:52 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 70 pg[9.5( v 69'1025 (0'0,69'1025] local-lis/les=0/0 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=59'1023 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'crash'
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:52.891+0000 7fa7a43ce140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 11:01:52 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'dashboard'
Oct 09 11:01:52 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:52 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:53 compute-2 ceph-mon[6044]: 11.b scrub starts
Oct 09 11:01:53 compute-2 ceph-mon[6044]: 11.b scrub ok
Oct 09 11:01:53 compute-2 ceph-mon[6044]: 10.b scrub starts
Oct 09 11:01:53 compute-2 ceph-mon[6044]: 10.b scrub ok
Oct 09 11:01:53 compute-2 ceph-mon[6044]: osdmap e69: 3 total, 3 up, 3 in
Oct 09 11:01:53 compute-2 ceph-mon[6044]: osdmap e70: 3 total, 3 up, 3 in
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:53 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:53 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'devicehealth'
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:53.517+0000 7fa7a43ce140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 11:01:53 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct 09 11:01:53 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 71 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=71) [2] r=0 lpr=71 pi=[55,71)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:01:53 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 71 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=71) [2] r=0 lpr=71 pi=[55,71)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:01:53 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 71 pg[9.15( v 42'1020 (0'0,42'1020] local-lis/les=70/71 n=4 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:53 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 71 pg[9.5( v 69'1025 (0'0,69'1025] local-lis/les=70/71 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=69'1025 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:53 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 71 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=70/71 n=5 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=70) [2] r=0 lpr=70 pi=[55,70)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]:   from numpy import show_config as show_numpy_config
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:53.691+0000 7fa7a43ce140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'influx'
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:53.764+0000 7fa7a43ce140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'insights'
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'iostat'
Oct 09 11:01:53 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:53.908+0000 7fa7a43ce140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 11:01:53 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'k8sevents'
Oct 09 11:01:54 compute-2 ceph-mon[6044]: 11.9 scrub starts
Oct 09 11:01:54 compute-2 ceph-mon[6044]: 11.9 scrub ok
Oct 09 11:01:54 compute-2 ceph-mon[6044]: 12.0 scrub starts
Oct 09 11:01:54 compute-2 ceph-mon[6044]: 12.0 scrub ok
Oct 09 11:01:54 compute-2 ceph-mon[6044]: osdmap e71: 3 total, 3 up, 3 in
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:54 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:54 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:54 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce0001ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:54 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:54 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:54 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:54.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:54 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:54 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 09 11:01:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'localpool'
Oct 09 11:01:54 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 09 11:01:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 11:01:54 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:54 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:54 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:54.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'mirroring'
Oct 09 11:01:54 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct 09 11:01:54 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 72 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=71/72 n=6 ec=55/36 lis/c=68/55 les/c/f=69/56/0 sis=71) [2] r=0 lpr=71 pi=[55,71)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:01:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'nfs'
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:54.914+0000 7fa7a43ce140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 11:01:54 compute-2 ceph-mgr[6348]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 11:01:54 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'orchestrator'
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl[16151]: [WARNING] 281/110154 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 11:01:54 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:54 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:55 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:55 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 11.d scrub starts
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 11.d scrub ok
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 10.6 scrub starts
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 10.6 scrub ok
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 9.5 scrub starts
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 9.5 scrub ok
Oct 09 11:01:55 compute-2 ceph-mon[6044]: osdmap e72: 3 total, 3 up, 3 in
Oct 09 11:01:55 compute-2 ceph-mon[6044]: 12.1f scrub starts
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.147+0000 7fa7a43ce140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.223+0000 7fa7a43ce140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'osd_support'
Oct 09 11:01:55 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 09 11:01:55 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.294+0000 7fa7a43ce140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.375+0000 7fa7a43ce140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'progress'
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.452+0000 7fa7a43ce140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'prometheus'
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.829+0000 7fa7a43ce140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rbd_support'
Oct 09 11:01:55 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:55.933+0000 7fa7a43ce140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 11:01:55 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'restful'
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:56 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:56 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 8.e scrub starts
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 8.e scrub ok
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 12.1f scrub ok
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 10.f scrub starts
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 10.f scrub ok
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 10.1c scrub starts
Oct 09 11:01:56 compute-2 ceph-mon[6044]: 10.1c scrub ok
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:56 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rgw'
Oct 09 11:01:56 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:56 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:56 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:56.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:56 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:56 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct 09 11:01:56 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:56.387+0000 7fa7a43ce140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 11:01:56 compute-2 ceph-mgr[6348]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 11:01:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'rook'
Oct 09 11:01:56 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:56 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:01:56 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:56.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:01:56 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:56 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:56 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:56.988+0000 7fa7a43ce140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 11:01:56 compute-2 ceph-mgr[6348]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 11:01:56 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'selftest'
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:57 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:57 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:57.068+0000 7fa7a43ce140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'snap_schedule'
Oct 09 11:01:57 compute-2 ceph-mon[6044]: 11.2 scrub starts
Oct 09 11:01:57 compute-2 ceph-mon[6044]: 11.2 scrub ok
Oct 09 11:01:57 compute-2 ceph-mon[6044]: 8.d scrub starts
Oct 09 11:01:57 compute-2 ceph-mon[6044]: 8.d scrub ok
Oct 09 11:01:57 compute-2 ceph-mon[6044]: 10.1a scrub starts
Oct 09 11:01:57 compute-2 ceph-mon[6044]: 10.1a scrub ok
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:57.151+0000 7fa7a43ce140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'stats'
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'status'
Oct 09 11:01:57 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Oct 09 11:01:57 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:57.301+0000 7fa7a43ce140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telegraf'
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:57.373+0000 7fa7a43ce140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'telemetry'
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:57.544+0000 7fa7a43ce140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 11:01:57 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:57.792+0000 7fa7a43ce140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 11:01:57 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'volumes'
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:58 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:58 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:58.114+0000 7fa7a43ce140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: mgr[py] Loading python module 'zabbix'
Oct 09 11:01:58 compute-2 ceph-mon[6044]: 8.1 scrub starts
Oct 09 11:01:58 compute-2 ceph-mon[6044]: 8.1 scrub ok
Oct 09 11:01:58 compute-2 ceph-mon[6044]: 11.13 deep-scrub starts
Oct 09 11:01:58 compute-2 ceph-mon[6044]: 11.13 deep-scrub ok
Oct 09 11:01:58 compute-2 ceph-mon[6044]: 12.1b scrub starts
Oct 09 11:01:58 compute-2 ceph-mon[6044]: 12.1b scrub ok
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:58 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce40021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 2025-10-09T11:01:58.192+0000 7fa7a43ce140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: mgr load Constructed class from module: dashboard
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: ms_deliver_dispatch: unhandled message 0x55ecbc301860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: mgr load Constructed class from module: prometheus
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [dashboard INFO root] Starting engine...
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus INFO root] Starting engine...
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: [09/Oct/2025:11:01:58] ENGINE Bus STARTING
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:01:58] ENGINE Bus STARTING
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: CherryPy Checker:
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: The Application mounted at '' has an empty config.
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: 
Oct 09 11:01:58 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:58 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:01:58 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:01:58.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:58 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:58 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 09 11:01:58 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [dashboard INFO root] Engine started...
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: [09/Oct/2025:11:01:58] ENGINE Serving on http://:::9283
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:01:58] ENGINE Serving on http://:::9283
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mgr-compute-2-agiurv[6344]: [09/Oct/2025:11:01:58] ENGINE Bus STARTED
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus INFO cherrypy.error] [09/Oct/2025:11:01:58] ENGINE Bus STARTED
Oct 09 11:01:58 compute-2 ceph-mgr[6348]: [prometheus INFO root] Engine started.
Oct 09 11:01:58 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct 09 11:01:58 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:01:58 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:01:58 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:01:58.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:01:58 compute-2 sshd-session[17327]: Accepted publickey for ceph-admin from 192.168.122.100 port 38326 ssh2: RSA SHA256:P/OphJWo0F+YDAhotp7+aZY/oZM2SlqCrGTOCv2h5KY
Oct 09 11:01:58 compute-2 systemd-logind[844]: New session 20 of user ceph-admin.
Oct 09 11:01:58 compute-2 systemd[1]: Started Session 20 of User ceph-admin.
Oct 09 11:01:58 compute-2 sshd-session[17327]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 11:01:58 compute-2 sudo[17331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:01:58 compute-2 sudo[17331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:58 compute-2 sudo[17331]: pam_unix(sudo:session): session closed for user root
Oct 09 11:01:58 compute-2 sudo[17357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 11:01:58 compute-2 sudo[17357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:01:58 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:01:58 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:01:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:01:59 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:59 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:01:59 2025: (VI_0) received an invalid passwd!
Oct 09 11:01:59 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.11 deep-scrub starts
Oct 09 11:01:59 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.11 deep-scrub ok
Oct 09 11:01:59 compute-2 ceph-mon[6044]: 8.0 scrub starts
Oct 09 11:01:59 compute-2 ceph-mon[6044]: 8.0 scrub ok
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Active manager daemon compute-0.izrudc restarted
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Activating manager daemon compute-0.izrudc
Oct 09 11:01:59 compute-2 ceph-mon[6044]: 10.10 scrub starts
Oct 09 11:01:59 compute-2 ceph-mon[6044]: 10.10 scrub ok
Oct 09 11:01:59 compute-2 ceph-mon[6044]: osdmap e73: 3 total, 3 up, 3 in
Oct 09 11:01:59 compute-2 ceph-mon[6044]: mgrmap e28: compute-0.izrudc(active, starting, since 0.16797s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.aesial"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.yzkqil"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.brbiqj"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.izrudc", "id": "compute-0.izrudc"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rtiqvm", "id": "compute-1.rtiqvm"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.agiurv", "id": "compute-2.agiurv"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Manager daemon compute-0.izrudc is now available
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv restarted
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Standby manager daemon compute-2.agiurv started
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/mirror_snapshot_schedule"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.izrudc/trash_purge_schedule"}]: dispatch
Oct 09 11:01:59 compute-2 ceph-mon[6044]: 10.1d scrub starts
Oct 09 11:01:59 compute-2 ceph-mon[6044]: 10.1d scrub ok
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm restarted
Oct 09 11:01:59 compute-2 ceph-mon[6044]: Standby manager daemon compute-1.rtiqvm started
Oct 09 11:01:59 compute-2 podman[17456]: 2025-10-09 11:01:59.570376307 +0000 UTC m=+0.078182988 container exec 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 11:01:59 compute-2 podman[17456]: 2025-10-09 11:01:59.688219625 +0000 UTC m=+0.196026286 container exec_died 01e731ed6921731ba9223e1da1cf5dc3266ebf5c5cc235dbb9fb4f5c01f74520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-mon-compute-2, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 11:02:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:00 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:00 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:00 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc00016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:00 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:00 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:00 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:00.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:00 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8001c00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:00 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct 09 11:02:00 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct 09 11:02:00 compute-2 podman[17592]: 2025-10-09 11:02:00.269983688 +0000 UTC m=+0.072370820 container exec 8d9bdd55b066c153b48497121c742a823a2cce56d0fdf8762dce99797205c9bc (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 11:02:00 compute-2 podman[17592]: 2025-10-09 11:02:00.307521147 +0000 UTC m=+0.109908259 container exec_died 8d9bdd55b066c153b48497121c742a823a2cce56d0fdf8762dce99797205c9bc (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 11:02:00 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:00 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:00 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:00.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:00 compute-2 ceph-mon[6044]: 8.7 deep-scrub starts
Oct 09 11:02:00 compute-2 ceph-mon[6044]: 8.7 deep-scrub ok
Oct 09 11:02:00 compute-2 ceph-mon[6044]: 12.11 deep-scrub starts
Oct 09 11:02:00 compute-2 ceph-mon[6044]: 12.11 deep-scrub ok
Oct 09 11:02:00 compute-2 ceph-mon[6044]: mgrmap e29: compute-0.izrudc(active, since 1.42084s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:02:00 compute-2 ceph-mon[6044]: [09/Oct/2025:11:01:59] ENGINE Bus STARTING
Oct 09 11:02:00 compute-2 ceph-mon[6044]: [09/Oct/2025:11:01:59] ENGINE Serving on https://192.168.122.100:7150
Oct 09 11:02:00 compute-2 ceph-mon[6044]: [09/Oct/2025:11:01:59] ENGINE Client ('192.168.122.100', 42036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 11:02:00 compute-2 ceph-mon[6044]: [09/Oct/2025:11:01:59] ENGINE Serving on http://192.168.122.100:8765
Oct 09 11:02:00 compute-2 ceph-mon[6044]: [09/Oct/2025:11:01:59] ENGINE Bus STARTED
Oct 09 11:02:00 compute-2 ceph-mon[6044]: 10.1f scrub starts
Oct 09 11:02:00 compute-2 ceph-mon[6044]: 10.1f scrub ok
Oct 09 11:02:00 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 09 11:02:00 compute-2 podman[17665]: 2025-10-09 11:02:00.561124092 +0000 UTC m=+0.048747214 container exec d4cc332abe71f9af4b878796d80d8f9b834be70c9e29ff5af33b17e3e230f3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 11:02:00 compute-2 podman[17665]: 2025-10-09 11:02:00.574443789 +0000 UTC m=+0.062066911 container exec_died d4cc332abe71f9af4b878796d80d8f9b834be70c9e29ff5af33b17e3e230f3c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 11:02:00 compute-2 podman[17728]: 2025-10-09 11:02:00.768655461 +0000 UTC m=+0.051463424 container exec f5196a93bddafce6b5ffdebf0b48b62f48b665aea938bf6384b229ceeb5d071c (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl)
Oct 09 11:02:00 compute-2 podman[17728]: 2025-10-09 11:02:00.778492587 +0000 UTC m=+0.061300359 container exec_died f5196a93bddafce6b5ffdebf0b48b62f48b665aea938bf6384b229ceeb5d071c (image=quay.io/ceph/haproxy:2.3, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-haproxy-nfs-cephfs-compute-2-xqfbnl)
Oct 09 11:02:00 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct 09 11:02:00 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:00 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce40021f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:00 compute-2 podman[17792]: 2025-10-09 11:02:00.974934414 +0000 UTC m=+0.047713095 container exec 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo, version=2.2.4, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Oct 09 11:02:01 compute-2 podman[17812]: 2025-10-09 11:02:01.04236646 +0000 UTC m=+0.049108975 container exec_died 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, version=2.2.4, description=keepalived for Ceph, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 09 11:02:01 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:01 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:01 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:01 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:01 compute-2 podman[17792]: 2025-10-09 11:02:01.067141129 +0000 UTC m=+0.139919790 container exec_died 9aa2875443b46c413f7626fcce46cb8de75b717a4e55ac6da1ab94878ff712a2 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 11:02:01 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 09 11:02:01 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 09 11:02:01 compute-2 sudo[17357]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:01 compute-2 sudo[17862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:02:01 compute-2 sudo[17862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:01 compute-2 sudo[17862]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:01 compute-2 sudo[17887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 11:02:01 compute-2 sudo[17887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 11.6 scrub starts
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 11.6 scrub ok
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 8.9 scrub starts
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 8.9 scrub ok
Oct 09 11:02:01 compute-2 ceph-mon[6044]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 12.16 scrub starts
Oct 09 11:02:01 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 09 11:02:01 compute-2 ceph-mon[6044]: osdmap e74: 3 total, 3 up, 3 in
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 12.16 scrub ok
Oct 09 11:02:01 compute-2 ceph-mon[6044]: mgrmap e30: compute-0.izrudc(active, since 2s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 11.18 scrub starts
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 11.18 scrub ok
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 11.e scrub starts
Oct 09 11:02:01 compute-2 ceph-mon[6044]: 11.e scrub ok
Oct 09 11:02:01 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:01 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:01 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:01 compute-2 sudo[17887]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:01 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct 09 11:02:01 compute-2 sudo[17943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:02:01 compute-2 sudo[17943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:01 compute-2 sudo[17943]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:02 compute-2 sudo[17968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 11:02:02 compute-2 sudo[17968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:02 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:02 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:02 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:02 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:02 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:02 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:02 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:02 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:02 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:02.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:02 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:02 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc00016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:02 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 09 11:02:02 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 09 11:02:02 compute-2 sudo[17968]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:02 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:02 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:02 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:02.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:02 compute-2 ceph-mon[6044]: 12.14 scrub starts
Oct 09 11:02:02 compute-2 ceph-mon[6044]: 12.14 scrub ok
Oct 09 11:02:02 compute-2 ceph-mon[6044]: osdmap e75: 3 total, 3 up, 3 in
Oct 09 11:02:02 compute-2 ceph-mon[6044]: 8.1a scrub starts
Oct 09 11:02:02 compute-2 ceph-mon[6044]: 8.1a scrub ok
Oct 09 11:02:02 compute-2 ceph-mon[6044]: 10.11 scrub starts
Oct 09 11:02:02 compute-2 ceph-mon[6044]: 10.11 scrub ok
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:02 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 11:02:02 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:02 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:02 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct 09 11:02:03 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:03 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:03 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:03 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:03 compute-2 sudo[18012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 11:02:03 compute-2 sudo[18012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18012]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 11:02:03 compute-2 sudo[18038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18038]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:02:03 compute-2 sudo[18063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18063]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:02:03 compute-2 sudo[18088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18088]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct 09 11:02:03 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct 09 11:02:03 compute-2 sudo[18113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:02:03 compute-2 sudo[18113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18113]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:02:03 compute-2 sudo[18161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18161]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new
Oct 09 11:02:03 compute-2 sudo[18186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18186]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 11:02:03 compute-2 sudo[18211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18211]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:02:03 compute-2 sudo[18236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18236]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:02:03 compute-2 sudo[18261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18261]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:02:03 compute-2 sudo[18286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18286]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:02:03 compute-2 sudo[18311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18311]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:03 compute-2 sudo[18336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:02:03 compute-2 sudo[18336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:03 compute-2 sudo[18336]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:02:04 compute-2 sudo[18384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18384]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:04 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:04 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:04 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:04 compute-2 sudo[18409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new
Oct 09 11:02:04 compute-2 sudo[18409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18409]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:04 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce40091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:04 compute-2 sudo[18434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:02:04 compute-2 sudo[18434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18434]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 11:02:04 compute-2 sudo[18459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18459]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:04 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:04 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:04.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:04 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:04 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:04 compute-2 sudo[18484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph
Oct 09 11:02:04 compute-2 sudo[18484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18484]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:02:04 compute-2 ceph-mon[6044]: pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:04 compute-2 ceph-mon[6044]: 10.7 scrub starts
Oct 09 11:02:04 compute-2 ceph-mon[6044]: 10.7 scrub ok
Oct 09 11:02:04 compute-2 ceph-mon[6044]: 8.1e scrub starts
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:04 compute-2 ceph-mon[6044]: 8.1e scrub ok
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:04 compute-2 ceph-mon[6044]: mgrmap e31: compute-0.izrudc(active, since 4s), standbys: compute-1.rtiqvm, compute-2.agiurv
Oct 09 11:02:04 compute-2 ceph-mon[6044]: osdmap e76: 3 total, 3 up, 3 in
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:04 compute-2 sudo[18509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 11:02:04 compute-2 ceph-mon[6044]: 8.a scrub starts
Oct 09 11:02:04 compute-2 ceph-mon[6044]: 8.a scrub ok
Oct 09 11:02:04 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 09 11:02:04 compute-2 sudo[18509]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 09 11:02:04 compute-2 sudo[18534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:02:04 compute-2 sudo[18534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18534]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:02:04 compute-2 sudo[18559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18559]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:04 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:04 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:04.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:04 compute-2 sudo[18607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:02:04 compute-2 sudo[18607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18607]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new
Oct 09 11:02:04 compute-2 sudo[18632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18632]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 11:02:04 compute-2 sudo[18657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18657]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:02:04 compute-2 sudo[18682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18682]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config
Oct 09 11:02:04 compute-2 sudo[18707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18707]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:02:04 compute-2 sudo[18732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18732]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:02:04 compute-2 sudo[18757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18757]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 sudo[18783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:02:04 compute-2 sudo[18783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:04 compute-2 sudo[18783]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:04 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:04 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:05 compute-2 sudo[18831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:02:05 compute-2 sudo[18831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:05 compute-2 sudo[18831]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:05 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:05 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:05 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:05 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:05 compute-2 sudo[18856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new
Oct 09 11:02:05 compute-2 sudo[18856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:05 compute-2 sudo[18856]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:05 compute-2 sudo[18881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-e990987d-9393-5e96-99ae-9e3a3319f191/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring.new /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:02:05 compute-2 sudo[18881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:05 compute-2 sudo[18881]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:05 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct 09 11:02:05 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 09 11:02:05 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 09 11:02:06 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:06 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:06 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:06 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:06 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:06 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.conf
Oct 09 11:02:06 compute-2 ceph-mon[6044]: 12.1 scrub starts
Oct 09 11:02:06 compute-2 ceph-mon[6044]: 12.1 scrub ok
Oct 09 11:02:06 compute-2 ceph-mon[6044]: 8.1d scrub starts
Oct 09 11:02:06 compute-2 ceph-mon[6044]: 8.1d scrub ok
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 11:02:06 compute-2 ceph-mon[6044]: pgmap v9: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:06 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-0:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-2:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:02:06 compute-2 ceph-mon[6044]: Updating compute-1:/var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/config/ceph.client.admin.keyring
Oct 09 11:02:06 compute-2 ceph-mon[6044]: 11.1b scrub starts
Oct 09 11:02:06 compute-2 ceph-mon[6044]: 11.1b scrub ok
Oct 09 11:02:06 compute-2 ceph-mon[6044]: osdmap e77: 3 total, 3 up, 3 in
Oct 09 11:02:06 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:06 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:06 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:06.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:06 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:06 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce40091b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:06 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Oct 09 11:02:06 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Oct 09 11:02:06 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:06 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:06 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:06.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:06 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:06 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:06 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:07 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:07 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:07 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:07 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:07 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct 09 11:02:07 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=78) [2] r=0 lpr=78 pi=[55,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:07 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=78) [2] r=0 lpr=78 pi=[55,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:07 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 09 11:02:07 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 11.a scrub starts
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 11.a scrub ok
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 11.1f deep-scrub starts
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 11.1f deep-scrub ok
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 8.3 scrub starts
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 8.3 scrub ok
Oct 09 11:02:07 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 8.4 scrub starts
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 8.4 scrub ok
Oct 09 11:02:07 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 11.10 scrub starts
Oct 09 11:02:07 compute-2 ceph-mon[6044]: 11.10 scrub ok
Oct 09 11:02:07 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 09 11:02:07 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:08 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:08 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:08 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:08 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:08 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:08 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:08.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:08 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:08 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 09 11:02:08 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:08 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.002000059s ======
Oct 09 11:02:08 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:08.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000059s
Oct 09 11:02:08 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 09 11:02:08 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:08 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4009ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:09 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:09 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:09 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:09 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:09 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.3 deep-scrub starts
Oct 09 11:02:09 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.3 deep-scrub ok
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:09.810893) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729810970, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1186, "num_deletes": 251, "total_data_size": 4996454, "memory_usage": 5366192, "flush_reason": "Manual Compaction"}
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 09 11:02:09 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct 09 11:02:09 compute-2 ceph-mon[6044]: pgmap v11: 353 pgs: 4 active+remapped, 349 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 9 op/s; 40 B/s, 3 objects/s recovering
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 12.13 scrub starts
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 12.13 scrub ok
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 11.1a deep-scrub starts
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 11.1a deep-scrub ok
Oct 09 11:02:09 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 09 11:02:09 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:09 compute-2 ceph-mon[6044]: osdmap e78: 3 total, 3 up, 3 in
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 8.13 scrub starts
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 8.13 scrub ok
Oct 09 11:02:09 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 11.7 scrub starts
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 11.7 scrub ok
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 11.11 scrub starts
Oct 09 11:02:09 compute-2 ceph-mon[6044]: 11.11 scrub ok
Oct 09 11:02:09 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007729904151, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3180379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6792, "largest_seqno": 7973, "table_properties": {"data_size": 3174489, "index_size": 2961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16143, "raw_average_key_size": 21, "raw_value_size": 3161283, "raw_average_value_size": 4301, "num_data_blocks": 130, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007703, "oldest_key_time": 1760007703, "file_creation_time": 1760007729, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e92a8cb1-44df-49f9-9e99-9e69cedae100", "db_session_id": "6DJJ1ETB7PHV9MCMB6A8", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 93313 microseconds, and 8652 cpu microseconds.
Oct 09 11:02:09 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 11:02:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:10 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:10 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:09.904219) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3180379 bytes OK
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:09.904237) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:10.065514) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:10.065562) EVENT_LOG_v1 {"time_micros": 1760007730065553, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:10.065585) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4989857, prev total WAL file size 4989898, number of live WAL files 2.
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:10.067254) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3105KB)], [15(11MB)]
Oct 09 11:02:10 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007730067289, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 15255197, "oldest_snapshot_seqno": -1}
Oct 09 11:02:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:10 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:10 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:10 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:10 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:10.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:10 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:10 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Oct 09 11:02:10 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:10 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:10 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:10.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:10 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:10 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:11 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:11 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:11 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:11 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:11 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Oct 09 11:02:11 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Oct 09 11:02:11 compute-2 ceph-mon[6044]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3314 keys, 13913274 bytes, temperature: kUnknown
Oct 09 11:02:11 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007731788667, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13913274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13886787, "index_size": 17097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 85670, "raw_average_key_size": 25, "raw_value_size": 13821381, "raw_average_value_size": 4170, "num_data_blocks": 745, "num_entries": 3314, "num_filter_entries": 3314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760007550, "oldest_key_time": 0, "file_creation_time": 1760007730, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e92a8cb1-44df-49f9-9e99-9e69cedae100", "db_session_id": "6DJJ1ETB7PHV9MCMB6A8", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 09 11:02:11 compute-2 ceph-mon[6044]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:11.788896) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13913274 bytes
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.003937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 8.9 rd, 8.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 11.5 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(9.2) write-amplify(4.4) OK, records in: 3846, records dropped: 532 output_compression: NoCompression
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.003975) EVENT_LOG_v1 {"time_micros": 1760007732003962, "job": 6, "event": "compaction_finished", "compaction_time_micros": 1721452, "compaction_time_cpu_micros": 32384, "output_level": 6, "num_output_files": 1, "total_output_size": 13913274, "num_input_records": 3846, "num_output_records": 3314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007732004741, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760007732006892, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:10.067174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.006970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.006976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.006978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.006980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:02:12 compute-2 ceph-mon[6044]: rocksdb: (Original Log Time 2025/10/09-11:02:12.006981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 11:02:12 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Oct 09 11:02:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:12 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:12 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:12 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:12 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4009ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:12 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:12 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:12 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:12 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:12 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct 09 11:02:12 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 8.11 scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 8.11 scrub ok
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 8.2 scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: pgmap v13: 353 pgs: 4 active+remapped, 349 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 8 op/s; 36 B/s, 3 objects/s recovering
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 11.5 deep-scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 11.5 deep-scrub ok
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 12.b scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 12.b scrub ok
Oct 09 11:02:12 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:12 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 09 11:02:12 compute-2 ceph-mon[6044]: osdmap e79: 3 total, 3 up, 3 in
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 8.1b deep-scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 8.1b deep-scrub ok
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 12.c scrub starts
Oct 09 11:02:12 compute-2 ceph-mon[6044]: 12.c scrub ok
Oct 09 11:02:12 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:12 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 09 11:02:12 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:12.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 09 11:02:12 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 80 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:12 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:12 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:12 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 80 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2] r=0 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:12 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:12 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[55,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:12 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:12 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:13 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:13 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:13 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:13 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:13 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 09 11:02:13 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 8.2 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 11.3 deep-scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 11.3 deep-scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 10.12 scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: pgmap v15: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 8.8 scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 8.8 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 12.8 scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 12.8 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 10.12 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 12.4 scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 11.f scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 11.f scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 10.8 scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 10.8 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 09 11:02:13 compute-2 ceph-mon[6044]: osdmap e80: 3 total, 3 up, 3 in
Oct 09 11:02:13 compute-2 ceph-mon[6044]: pgmap v17: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 11.12 scrub starts
Oct 09 11:02:13 compute-2 ceph-mon[6044]: 11.12 scrub ok
Oct 09 11:02:13 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:13 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 11:02:13 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 11:02:13 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:14 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:14 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:14 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:14 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Oct 09 11:02:14 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:14 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:14 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:14 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct 09 11:02:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:14 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4009ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:14 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Oct 09 11:02:14 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:14 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000030s ======
Oct 09 11:02:14 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:14.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 12.4 scrub ok
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 8.b scrub starts
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 8.b scrub ok
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 10.2 scrub starts
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 10.2 scrub ok
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 11.8 scrub starts
Oct 09 11:02:14 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 11.8 scrub ok
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 8.17 scrub starts
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 8.17 scrub ok
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 10.5 scrub starts
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 10.5 scrub ok
Oct 09 11:02:14 compute-2 ceph-mon[6044]: osdmap e81: 3 total, 3 up, 3 in
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 12.3 scrub starts
Oct 09 11:02:14 compute-2 ceph-mon[6044]: 12.3 scrub ok
Oct 09 11:02:14 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:14 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:15 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:15 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:15 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:15 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:15 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct 09 11:02:15 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[55,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:15 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[55,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:15 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[55,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:15 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=55/36 lis/c=55/55 les/c/f=56/56/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[55,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:16 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:16 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:16 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:16 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:16 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:16 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:16 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:16 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:16 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:16 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:16.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:16 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:16 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4009ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:16 compute-2 ceph-mon[6044]: pgmap v19: 353 pgs: 2 unknown, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:16 compute-2 ceph-mon[6044]: 11.14 scrub starts
Oct 09 11:02:16 compute-2 ceph-mon[6044]: 11.14 scrub ok
Oct 09 11:02:16 compute-2 ceph-mon[6044]: osdmap e82: 3 total, 3 up, 3 in
Oct 09 11:02:17 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:17 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:17 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:17 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:17 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct 09 11:02:17 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 83 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=83) [2] r=0 lpr=83 pi=[55,83)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:17 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 83 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=83) [2] r=0 lpr=83 pi=[55,83)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:17 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:17 compute-2 sudo[18919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 11:02:17 compute-2 sudo[18919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:17 compute-2 sudo[18919]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:17 compute-2 sudo[18944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 11:02:17 compute-2 sudo[18944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:17 compute-2 sudo[18944]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:17 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct 09 11:02:17 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 84 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=84) [2] r=0 lpr=84 pi=[55,84)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:17 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 84 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=6 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=84) [2] r=0 lpr=84 pi=[55,84)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:17 compute-2 ceph-mon[6044]: 11.1 scrub starts
Oct 09 11:02:17 compute-2 ceph-mon[6044]: 11.1 scrub ok
Oct 09 11:02:17 compute-2 ceph-mon[6044]: pgmap v21: 353 pgs: 2 activating+remapped, 2 unknown, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 0 B/s wr, 7 op/s; 11/204 objects misplaced (5.392%)
Oct 09 11:02:17 compute-2 ceph-mon[6044]: 8.18 deep-scrub starts
Oct 09 11:02:17 compute-2 ceph-mon[6044]: 8.18 deep-scrub ok
Oct 09 11:02:17 compute-2 ceph-mon[6044]: osdmap e83: 3 total, 3 up, 3 in
Oct 09 11:02:17 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:17 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:17 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:17 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 11:02:17 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 11:02:17 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:17 compute-2 ceph-mon[6044]: osdmap e84: 3 total, 3 up, 3 in
Oct 09 11:02:17 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 84 pg[9.18( v 42'1020 (0'0,42'1020] local-lis/les=83/84 n=5 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=83) [2] r=0 lpr=83 pi=[55,83)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:02:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:18 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:18 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:18 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:18 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:18 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:18 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:18.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:18 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:18 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:18 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:18 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:18 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct 09 11:02:18 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 85 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85) [2] r=0 lpr=85 pi=[55,85)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:18 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 85 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85) [2] r=0 lpr=85 pi=[55,85)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:18 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 85 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85) [2] r=0 lpr=85 pi=[55,85)/1 luod=0'0 crt=42'1020 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:18 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 85 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=0/0 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85) [2] r=0 lpr=85 pi=[55,85)/1 crt=42'1020 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:18 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 85 pg[9.8( v 42'1020 (0'0,42'1020] local-lis/les=84/85 n=6 ec=55/36 lis/c=80/55 les/c/f=81/56/0 sis=84) [2] r=0 lpr=84 pi=[55,84)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:02:18 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:18 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:19 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:19 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:19 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:19 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:19 compute-2 ceph-mon[6044]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 09 11:02:19 compute-2 ceph-mon[6044]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 09 11:02:19 compute-2 ceph-mon[6044]: 11.1c scrub starts
Oct 09 11:02:19 compute-2 ceph-mon[6044]: 11.1c scrub ok
Oct 09 11:02:19 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:19 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:19 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.izrudc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 11:02:19 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 11:02:19 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:19 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 09 11:02:19 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 09 11:02:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:20 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:20 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:20 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4009ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:20 compute-2 ceph-mon[6044]: Reconfiguring mgr.compute-0.izrudc (monmap changed)...
Oct 09 11:02:20 compute-2 ceph-mon[6044]: Reconfiguring daemon mgr.compute-0.izrudc on compute-0
Oct 09 11:02:20 compute-2 ceph-mon[6044]: pgmap v24: 353 pgs: 2 activating+remapped, 2 unknown, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 0 B/s wr, 11 op/s; 11/204 objects misplaced (5.392%)
Oct 09 11:02:20 compute-2 ceph-mon[6044]: 11.1d scrub starts
Oct 09 11:02:20 compute-2 ceph-mon[6044]: 11.1d scrub ok
Oct 09 11:02:20 compute-2 ceph-mon[6044]: osdmap e85: 3 total, 3 up, 3 in
Oct 09 11:02:20 compute-2 ceph-mon[6044]: 9.8 scrub starts
Oct 09 11:02:20 compute-2 ceph-mon[6044]: 9.8 scrub ok
Oct 09 11:02:20 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:20 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:20 compute-2 ceph-mon[6044]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 09 11:02:20 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 11:02:20 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:20 compute-2 ceph-mon[6044]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 09 11:02:20 compute-2 ceph-mon[6044]: 11.1e scrub starts
Oct 09 11:02:20 compute-2 ceph-mon[6044]: 11.1e scrub ok
Oct 09 11:02:20 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:20 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:20 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:20.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:20 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:20 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct 09 11:02:20 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 86 pg[9.9( v 42'1020 (0'0,42'1020] local-lis/les=85/86 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85) [2] r=0 lpr=85 pi=[55,85)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:02:20 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 86 pg[9.19( v 42'1020 (0'0,42'1020] local-lis/les=85/86 n=5 ec=55/36 lis/c=82/55 les/c/f=83/56/0 sis=85) [2] r=0 lpr=85 pi=[55,85)/1 crt=42'1020 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:02:20 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:20 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:20 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:20 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:20 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:21 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:21 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:21 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:21 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:21 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 09 11:02:21 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 09 11:02:21 compute-2 ceph-mon[6044]: 10.18 scrub starts
Oct 09 11:02:21 compute-2 ceph-mon[6044]: 10.18 scrub ok
Oct 09 11:02:21 compute-2 ceph-mon[6044]: osdmap e86: 3 total, 3 up, 3 in
Oct 09 11:02:21 compute-2 ceph-mon[6044]: pgmap v27: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct 09 11:02:21 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:21 compute-2 ceph-mon[6044]: 8.12 scrub starts
Oct 09 11:02:21 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:21 compute-2 ceph-mon[6044]: Reconfiguring osd.0 (monmap changed)...
Oct 09 11:02:21 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 09 11:02:21 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:21 compute-2 ceph-mon[6044]: Reconfiguring daemon osd.0 on compute-0
Oct 09 11:02:21 compute-2 ceph-mon[6044]: 8.12 scrub ok
Oct 09 11:02:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:22 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:22 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:22 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:22 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:22 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Oct 09 11:02:22 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Oct 09 11:02:22 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:22 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:22 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:22 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce4009ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 10.19 scrub starts
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 10.19 scrub ok
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 8.5 scrub starts
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 8.5 scrub ok
Oct 09 11:02:22 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 8.19 scrub starts
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 8.19 scrub ok
Oct 09 11:02:22 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:22 compute-2 ceph-mon[6044]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 09 11:02:22 compute-2 ceph-mon[6044]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 12.19 scrub starts
Oct 09 11:02:22 compute-2 ceph-mon[6044]: 12.19 scrub ok
Oct 09 11:02:22 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:22 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:22 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:22.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:22 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:22 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:23 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:23 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:23 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:23 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Oct 09 11:02:23 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Oct 09 11:02:23 compute-2 ceph-mon[6044]: 12.2 scrub starts
Oct 09 11:02:23 compute-2 ceph-mon[6044]: 12.2 scrub ok
Oct 09 11:02:23 compute-2 ceph-mon[6044]: pgmap v28: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 0 objects/s recovering
Oct 09 11:02:23 compute-2 ceph-mon[6044]: 11.4 scrub starts
Oct 09 11:02:23 compute-2 ceph-mon[6044]: 11.4 scrub ok
Oct 09 11:02:23 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:23 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:23 compute-2 ceph-mon[6044]: 10.1b scrub starts
Oct 09 11:02:23 compute-2 ceph-mon[6044]: 10.1b scrub ok
Oct 09 11:02:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:24 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:24 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:24 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:24 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:24 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:24 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:24.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:24 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:24 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 09 11:02:24 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 09 11:02:24 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:24 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:24 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:24.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:24 compute-2 ceph-mon[6044]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 09 11:02:24 compute-2 ceph-mon[6044]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 12.1e scrub starts
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 12.1e scrub ok
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 9.6 scrub starts
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 9.6 scrub ok
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 12.1c scrub starts
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 12.1c scrub ok
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 11.19 scrub starts
Oct 09 11:02:24 compute-2 ceph-mon[6044]: 11.19 scrub ok
Oct 09 11:02:24 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:24 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:24 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd4000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:25 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:25 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:25 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:25 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Oct 09 11:02:25 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Oct 09 11:02:25 compute-2 ceph-mon[6044]: pgmap v29: 353 pgs: 2 peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 16 B/s, 0 objects/s recovering
Oct 09 11:02:25 compute-2 ceph-mon[6044]: 9.1e scrub starts
Oct 09 11:02:25 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:25 compute-2 ceph-mon[6044]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 09 11:02:25 compute-2 ceph-mon[6044]: 9.1e scrub ok
Oct 09 11:02:25 compute-2 ceph-mon[6044]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 09 11:02:25 compute-2 ceph-mon[6044]: 12.10 scrub starts
Oct 09 11:02:25 compute-2 ceph-mon[6044]: 12.10 scrub ok
Oct 09 11:02:25 compute-2 ceph-mon[6044]: 12.1d scrub starts
Oct 09 11:02:25 compute-2 ceph-mon[6044]: 12.1d scrub ok
Oct 09 11:02:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:26 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:26 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:26 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:26 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:26 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:26 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:26.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:26 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:26 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Oct 09 11:02:26 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Oct 09 11:02:26 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:26 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000030s ======
Oct 09 11:02:26 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:26.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Oct 09 11:02:26 compute-2 ceph-mon[6044]: 10.14 scrub starts
Oct 09 11:02:26 compute-2 ceph-mon[6044]: 10.14 scrub ok
Oct 09 11:02:26 compute-2 ceph-mon[6044]: 10.1e deep-scrub starts
Oct 09 11:02:26 compute-2 ceph-mon[6044]: 10.1e deep-scrub ok
Oct 09 11:02:26 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 09 11:02:26 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:26 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:26 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 11:02:26 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:26 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct 09 11:02:26 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:26 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:27 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:27 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:27 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:27 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:27 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 09 11:02:27 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 09 11:02:27 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct 09 11:02:27 compute-2 ceph-mon[6044]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 47 B/s, 0 objects/s recovering
Oct 09 11:02:27 compute-2 ceph-mon[6044]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 09 11:02:27 compute-2 ceph-mon[6044]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 09 11:02:27 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 09 11:02:27 compute-2 ceph-mon[6044]: osdmap e87: 3 total, 3 up, 3 in
Oct 09 11:02:27 compute-2 ceph-mon[6044]: 12.12 deep-scrub starts
Oct 09 11:02:27 compute-2 ceph-mon[6044]: 12.12 deep-scrub ok
Oct 09 11:02:27 compute-2 ceph-mon[6044]: 10.4 scrub starts
Oct 09 11:02:27 compute-2 ceph-mon[6044]: 10.4 scrub ok
Oct 09 11:02:27 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:27 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:27 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 09 11:02:27 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:27 compute-2 ceph-mon[6044]: osdmap e88: 3 total, 3 up, 3 in
Oct 09 11:02:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:28 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:28 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:28 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd40016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:28 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct 09 11:02:28 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:28 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:28 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:28.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:28 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:28 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct 09 11:02:28 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:28 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:28 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:28.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:28 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct 09 11:02:28 compute-2 ceph-mon[6044]: Reconfiguring osd.1 (monmap changed)...
Oct 09 11:02:28 compute-2 ceph-mon[6044]: Reconfiguring daemon osd.1 on compute-1
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:28 compute-2 ceph-mon[6044]: 12.6 scrub starts
Oct 09 11:02:28 compute-2 ceph-mon[6044]: 12.6 scrub ok
Oct 09 11:02:28 compute-2 ceph-mon[6044]: 8.1f scrub starts
Oct 09 11:02:28 compute-2 ceph-mon[6044]: 8.1f scrub ok
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
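[annotation] The mgr is stepping default.rgw.log's pgp_num_actual up one unit at a time (11 and 12 here, 13 through 15 below): PG splits are applied gradually so that only a couple of placement groups peer and remap per step, which matches the brief "2 peering" and "2 active+remapped" blips in the surrounding pgmap lines. A sketch that recovers the progression from journal text:

    import json
    import re

    # Matches both "cmd=[{...}]: dispatch" and "cmd='[{...}]': finished".
    CMD = re.compile(r"cmd='?\[(\{.*\})\]'?: (dispatch|finished)")

    def pgp_steps(lines, pool="default.rgw.log"):
        for line in lines:
            m = CMD.search(line)
            if not m:
                continue
            try:
                cmd = json.loads(m.group(1))
            except json.JSONDecodeError:
                continue
            if (cmd.get("prefix") == "osd pool set"
                    and cmd.get("pool") == pool
                    and cmd.get("var") == "pgp_num_actual"):
                yield cmd["val"], m.group(2)

    # Over this section it yields ("11", "dispatch"), ("11", "finished"),
    # ("12", "dispatch"), ("12", "finished"), and so on.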
Oct 09 11:02:28 compute-2 ceph-mon[6044]: osdmap e89: 3 total, 3 up, 3 in
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:28 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:28 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:28 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:29 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:29 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:29 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:29 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:29 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 09 11:02:29 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 09 11:02:29 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct 09 11:02:30 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:30 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:30 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:30 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:30 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:30 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:30 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:30 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:30 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:30.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:30 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:30 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd40016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:30 compute-2 ceph-mon[6044]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 09 11:02:30 compute-2 ceph-mon[6044]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 09 11:02:30 compute-2 ceph-mon[6044]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Oct 09 11:02:30 compute-2 ceph-mon[6044]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Oct 09 11:02:30 compute-2 ceph-mon[6044]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Oct 09 11:02:30 compute-2 ceph-mon[6044]: 12.e scrub starts
Oct 09 11:02:30 compute-2 ceph-mon[6044]: 12.e scrub ok
Oct 09 11:02:30 compute-2 ceph-mon[6044]: 8.1c scrub starts
Oct 09 11:02:30 compute-2 ceph-mon[6044]: 8.1c scrub ok
Oct 09 11:02:30 compute-2 ceph-mon[6044]: osdmap e90: 3 total, 3 up, 3 in
Oct 09 11:02:30 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct 09 11:02:30 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct 09 11:02:30 compute-2 sshd-session[18983]: Accepted publickey for zuul from 192.168.122.10 port 37112 ssh2: ECDSA SHA256:RRIwAVyoA3iw56JIY0LmsrTgy+NWFNam8Udacp+6pQ4
Oct 09 11:02:30 compute-2 systemd-logind[844]: New session 21 of user zuul.
Oct 09 11:02:30 compute-2 systemd[1]: Started Session 21 of User zuul.
Oct 09 11:02:30 compute-2 sshd-session[18983]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 11:02:30 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:30 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:30 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:30.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:30 compute-2 sudo[18987]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 11:02:30 compute-2 sudo[18987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
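[annotation] Session 21 is the Zuul CI user logging in over SSH and immediately running an sos report collection into /var/tmp/sos-osp as root, i.e. the job gathering diagnostics from this node. Because sudo logs the full command line, the journal doubles as an audit trail of what CI executed as root; a sketch that extracts it (the demo line is shortened, not the sos command above):

    import re

    # sudo's log format: "<user> : PWD=... ; USER=<as> ; COMMAND=<cmd>"
    SUDO = re.compile(
        r"sudo\[\d+\]:\s+(?P<user>\S+) : .*?"
        r"USER=(?P<as>\S+) ; COMMAND=(?P<cmd>.+)$"
    )

    line = ("Oct 09 11:02:30 compute-2 sudo[18987]:     zuul : "
            "PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'echo demo'")
    m = SUDO.search(line)
    print(f"{m['user']} ran as {m['as']}: {m['cmd']}")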
Oct 09 11:02:30 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:30 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:31 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:31 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:31 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:31 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:31 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct 09 11:02:31 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.f deep-scrub starts
Oct 09 11:02:31 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.f deep-scrub ok
Oct 09 11:02:31 compute-2 ceph-mon[6044]: 9.c scrub starts
Oct 09 11:02:31 compute-2 ceph-mon[6044]: 9.c scrub ok
Oct 09 11:02:31 compute-2 ceph-mon[6044]: pgmap v36: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 0 objects/s recovering
Oct 09 11:02:31 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 09 11:02:31 compute-2 ceph-mon[6044]: 12.7 scrub starts
Oct 09 11:02:31 compute-2 ceph-mon[6044]: 12.7 scrub ok
Oct 09 11:02:31 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 09 11:02:31 compute-2 ceph-mon[6044]: osdmap e91: 3 total, 3 up, 3 in
Oct 09 11:02:32 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:32 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:32 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:32 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:32 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:32 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:32 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:32 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:32 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:32 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:32.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:32 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:32 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:32 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 09 11:02:32 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 09 11:02:32 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:32 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:32 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:32.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 9.0 scrub starts
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 9.0 scrub ok
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 8.f deep-scrub starts
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 8.f deep-scrub ok
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 9.a scrub starts
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 9.a scrub ok
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 9.1 deep-scrub starts
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 9.1 deep-scrub ok
Oct 09 11:02:32 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 10.3 scrub starts
Oct 09 11:02:32 compute-2 ceph-mon[6044]: 10.3 scrub ok
Oct 09 11:02:32 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:32 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd40016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:32 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct 09 11:02:33 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 92 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=71/72 n=6 ec=55/36 lis/c=71/71 les/c/f=72/72/0 sis=92 pruub=9.612013817s) [1] r=-1 lpr=92 pi=[71,92)/1 crt=42'1020 mlcod 0'0 active pruub 184.174560547s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:33 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 92 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=71/72 n=6 ec=55/36 lis/c=71/71 les/c/f=72/72/0 sis=92 pruub=9.611967087s) [1] r=-1 lpr=92 pi=[71,92)/1 crt=42'1020 mlcod 0'0 unknown NOTIFY pruub 184.174560547s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:33 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 92 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=70/71 n=5 ec=55/36 lis/c=70/70 les/c/f=71/71/0 sis=92 pruub=8.604992867s) [1] r=-1 lpr=92 pi=[70,92)/1 crt=42'1020 mlcod 0'0 active pruub 183.168640137s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:33 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 92 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=70/71 n=5 ec=55/36 lis/c=70/70 les/c/f=71/71/0 sis=92 pruub=8.604966164s) [1] r=-1 lpr=92 pi=[70,92)/1 crt=42'1020 mlcod 0'0 unknown NOTIFY pruub 183.168640137s@ mbc={}] state<Start>: transitioning to Stray
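[annotation] The osd.2 peering lines record two interval changes for pgs 9.d and 9.1d at epoch 92: the up and acting sets move from [2] to [1], osd.2 drops from primary (role 0) to no role (-1), and each PG transitions to Stray on this OSD. Together with the flip back at epoch 93 below (acting [1] -> [2] while up stays [1]), this is consistent with temporary remapping while the new acting set catches up during the ongoing PG splits, though the log alone does not state the cause. A sketch that extracts the transitions:

    import re

    # Pulls "up [a] -> [b], acting [c] -> [d]" per PG from peering lines.
    PEER = re.compile(
        r"pg\[(?P<pgid>\d+\.[0-9a-f]+).*?"
        r"up \[(?P<up0>[\d,]*)\] -> \[(?P<up1>[\d,]*)\], "
        r"acting \[(?P<a0>[\d,]*)\] -> \[(?P<a1>[\d,]*)\]"
    )

    with open("journal.txt") as fh:   # hypothetical export of this journal
        for line in fh:
            if m := PEER.search(line):
                d = m.groupdict()
                print(f"pg {d['pgid']}: up [{d['up0']}]->[{d['up1']}], "
                      f"acting [{d['a0']}]->[{d['a1']}]")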
Oct 09 11:02:33 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:33 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:33 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:33 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:33 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.1a scrub starts
Oct 09 11:02:33 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.1a scrub ok
Oct 09 11:02:33 compute-2 sudo[19169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:02:33 compute-2 sudo[19169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:33 compute-2 sudo[19169]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:33 compute-2 sudo[19194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:02:33 compute-2 sudo[19194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:33 compute-2 ceph-mon[6044]: pgmap v38: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 46 B/s, 0 objects/s recovering
Oct 09 11:02:33 compute-2 ceph-mon[6044]: 9.1a scrub starts
Oct 09 11:02:33 compute-2 ceph-mon[6044]: 9.1a scrub ok
Oct 09 11:02:33 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 09 11:02:33 compute-2 ceph-mon[6044]: osdmap e92: 3 total, 3 up, 3 in
Oct 09 11:02:33 compute-2 ceph-mon[6044]: 9.4 scrub starts
Oct 09 11:02:33 compute-2 ceph-mon[6044]: 9.4 scrub ok
Oct 09 11:02:33 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:33 compute-2 ceph-mon[6044]: 12.1a scrub starts
Oct 09 11:02:33 compute-2 ceph-mon[6044]: 12.1a scrub ok
Oct 09 11:02:33 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:33 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 11:02:33 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 11:02:33 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:33 compute-2 podman[19237]: 2025-10-09 11:02:33.99153268 +0000 UTC m=+0.036443039 container create 6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_banach, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 11:02:34 compute-2 systemd[13570]: Starting Mark boot as successful...
Oct 09 11:02:34 compute-2 systemd[1]: Started libpod-conmon-6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e.scope.
Oct 09 11:02:34 compute-2 systemd[13570]: Finished Mark boot as successful.
Oct 09 11:02:34 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:02:34 compute-2 podman[19237]: 2025-10-09 11:02:34.042337305 +0000 UTC m=+0.087247704 container init 6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_banach, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 11:02:34 compute-2 podman[19237]: 2025-10-09 11:02:34.04873012 +0000 UTC m=+0.093640489 container start 6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 09 11:02:34 compute-2 podman[19237]: 2025-10-09 11:02:34.051937113 +0000 UTC m=+0.096847502 container attach 6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Oct 09 11:02:34 compute-2 quizzical_banach[19254]: 167 167
Oct 09 11:02:34 compute-2 systemd[1]: libpod-6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e.scope: Deactivated successfully.
Oct 09 11:02:34 compute-2 podman[19237]: 2025-10-09 11:02:34.053515339 +0000 UTC m=+0.098425728 container died 6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_banach, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Oct 09 11:02:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:34 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:34 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:34 compute-2 podman[19237]: 2025-10-09 11:02:33.975077462 +0000 UTC m=+0.019987861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:02:34 compute-2 systemd[1]: var-lib-containers-storage-overlay-f36920dae8e342a0ce6dc6bae55157e7c15fa8ce3a58b6a550a82ab4a70239fd-merged.mount: Deactivated successfully.
Oct 09 11:02:34 compute-2 podman[19237]: 2025-10-09 11:02:34.0886764 +0000 UTC m=+0.133586769 container remove 6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_banach, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 11:02:34 compute-2 systemd[1]: libpod-conmon-6417e3747fc05fd2bab8712692bf3c7d16d7ef53ff29cd9bcb905e0fca0e944e.scope: Deactivated successfully.
Oct 09 11:02:34 compute-2 sudo[19194]: pam_unix(sudo:session): session closed for user root
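[annotation] Each cephadm _orch deploy above spawns a short-lived helper container (quizzical_banach, then loving_jackson below) that runs the full create, init, start, attach, died, remove cycle in under 100 ms; the "167 167" it prints appears to be the uid/gid pair of the ceph user inside the image, which cephadm probes before deploying. A sketch that groups podman lifecycle events by container ID to spot such helpers:

    import re
    from collections import defaultdict

    # journald lines: "podman[pid]: <date> <time> +0000 UTC m=+... container <event> <id> ..."
    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ \S+ \S+ container "
        r"(?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    timeline = defaultdict(list)
    with open("journal.txt") as fh:   # hypothetical export of this journal
        for line in fh:
            if m := EVENT.search(line):
                timeline[m["cid"]].append((m["ts"], m["event"]))

    for cid, events in timeline.items():
        print(cid[:12], [e for _, e in events])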
Oct 09 11:02:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:34 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:34 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct 09 11:02:34 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 93 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=71/72 n=6 ec=55/36 lis/c=71/71 les/c/f=72/72/0 sis=93) [1]/[2] r=0 lpr=93 pi=[71,93)/1 crt=42'1020 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:34 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 93 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=71/72 n=6 ec=55/36 lis/c=71/71 les/c/f=72/72/0 sis=93) [1]/[2] r=0 lpr=93 pi=[71,93)/1 crt=42'1020 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:34 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 93 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=70/71 n=5 ec=55/36 lis/c=70/70 les/c/f=71/71/0 sis=93) [1]/[2] r=0 lpr=93 pi=[70,93)/1 crt=42'1020 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:34 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 93 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=70/71 n=5 ec=55/36 lis/c=70/70 les/c/f=71/71/0 sis=93) [1]/[2] r=0 lpr=93 pi=[70,93)/1 crt=42'1020 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 11:02:34 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:34 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 09 11:02:34 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 09 11:02:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:34 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:34 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 09 11:02:34 compute-2 sudo[19270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 11:02:34 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 09 11:02:34 compute-2 sudo[19270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:34 compute-2 sudo[19270]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:34 compute-2 sudo[19295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/e990987d-9393-5e96-99ae-9e3a3319f191/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid e990987d-9393-5e96-99ae-9e3a3319f191
Oct 09 11:02:34 compute-2 sudo[19295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:34 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:34 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:34 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.760304233 +0000 UTC m=+0.037221041 container create 6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jackson, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 11:02:34 compute-2 ceph-mon[6044]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 09 11:02:34 compute-2 ceph-mon[6044]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 09 11:02:34 compute-2 ceph-mon[6044]: 9.1c scrub starts
Oct 09 11:02:34 compute-2 ceph-mon[6044]: 9.1c scrub ok
Oct 09 11:02:34 compute-2 ceph-mon[6044]: osdmap e93: 3 total, 3 up, 3 in
Oct 09 11:02:34 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:34 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 09 11:02:34 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:34 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.agiurv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 11:02:34 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 11:02:34 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 11:02:34 compute-2 ceph-mon[6044]: 8.6 scrub starts
Oct 09 11:02:34 compute-2 ceph-mon[6044]: 8.6 scrub ok
Oct 09 11:02:34 compute-2 systemd[1]: Started libpod-conmon-6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252.scope.
Oct 09 11:02:34 compute-2 systemd[1]: Started libcrun container.
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.834518517 +0000 UTC m=+0.111435345 container init 6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.84046824 +0000 UTC m=+0.117385038 container start 6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jackson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.745074451 +0000 UTC m=+0.021991279 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 11:02:34 compute-2 loving_jackson[19352]: 167 167
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.843794227 +0000 UTC m=+0.120711065 container attach 6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jackson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 11:02:34 compute-2 systemd[1]: libpod-6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252.scope: Deactivated successfully.
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.844487057 +0000 UTC m=+0.121403865 container died 6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jackson, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 09 11:02:34 compute-2 systemd[1]: var-lib-containers-storage-overlay-28e14037bdf1191d1428a52a5bb67bcfdca1a4605aa60a63b97c4d27f150ef62-merged.mount: Deactivated successfully.
Oct 09 11:02:34 compute-2 podman[19336]: 2025-10-09 11:02:34.881965015 +0000 UTC m=+0.158881823 container remove 6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 11:02:34 compute-2 systemd[1]: libpod-conmon-6b37a6bc3e0fe7d3747ca8a76603193e252e8783141488970cc93c88240eb252.scope: Deactivated successfully.
Oct 09 11:02:34 compute-2 sudo[19295]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:34 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:34 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:35 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:35 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:35 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:35 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct 09 11:02:35 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.18 scrub starts
Oct 09 11:02:35 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 12.18 scrub ok
Oct 09 11:02:35 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 94 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=93/94 n=5 ec=55/36 lis/c=70/70 les/c/f=71/71/0 sis=93) [1]/[2] async=[1] r=0 lpr=93 pi=[70,93)/1 crt=42'1020 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:02:35 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 94 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=93/94 n=6 ec=55/36 lis/c=71/71 les/c/f=72/72/0 sis=93) [1]/[2] async=[1] r=0 lpr=93 pi=[71,93)/1 crt=42'1020 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 11:02:35 compute-2 ceph-mon[6044]: pgmap v41: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:35 compute-2 ceph-mon[6044]: Reconfiguring mgr.compute-2.agiurv (monmap changed)...
Oct 09 11:02:35 compute-2 ceph-mon[6044]: Reconfiguring daemon mgr.compute-2.agiurv on compute-2
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:35 compute-2 ceph-mon[6044]: 9.12 deep-scrub starts
Oct 09 11:02:35 compute-2 ceph-mon[6044]: 9.12 deep-scrub ok
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:35 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 09 11:02:35 compute-2 ceph-mon[6044]: osdmap e94: 3 total, 3 up, 3 in
Oct 09 11:02:35 compute-2 ceph-mon[6044]: 12.18 scrub starts
Oct 09 11:02:35 compute-2 ceph-mon[6044]: 12.18 scrub ok
Oct 09 11:02:35 compute-2 ovs-vsctl[19398]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
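[annotation] The single ovs-vsctl ERR is a benign probe: something queried other_config:dpdk-init on the root Open_vSwitch record, and the key is simply not set (DPDK is not enabled on this node), so db_ctl_base reports the missing key. The query can be made non-fatal with --if-exists; a sketch, assuming ovs-vsctl is installed and behaves as its manual describes for missing keys:

    import subprocess

    # --if-exists should yield an empty result instead of the ERR above
    # when other_config has no dpdk-init key.
    r = subprocess.run(
        ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
         "other_config:dpdk-init"],
        capture_output=True, text=True,
    )
    print(repr(r.stdout.strip()))   # '' when dpdk-init is unset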
Oct 09 11:02:36 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:36 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:36 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:36 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:36 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:36 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd4002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:36 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:36 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:36 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:36 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:36 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:36 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:36 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:36 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:36 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct 09 11:02:36 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 95 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=93/94 n=5 ec=55/36 lis/c=93/70 les/c/f=94/71/0 sis=95 pruub=14.725389481s) [1] async=[1] r=-1 lpr=95 pi=[70,95)/1 crt=42'1020 mlcod 42'1020 active pruub 193.215774536s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:36 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 95 pg[9.1d( v 42'1020 (0'0,42'1020] local-lis/les=93/94 n=5 ec=55/36 lis/c=93/70 les/c/f=94/71/0 sis=95 pruub=14.725299835s) [1] r=-1 lpr=95 pi=[70,95)/1 crt=42'1020 mlcod 0'0 unknown NOTIFY pruub 193.215774536s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:36 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:36 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:37 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:37 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:37 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:37 compute-2 lvm[19709]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 11:02:37 compute-2 lvm[19709]: VG ceph_vg0 finished
Oct 09 11:02:37 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:37 compute-2 kernel: block vda: the capability attribute has been deprecated.
Oct 09 11:02:37 compute-2 sudo[19792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 11:02:37 compute-2 sudo[19792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 11:02:37 compute-2 sudo[19792]: pam_unix(sudo:session): session closed for user root
Oct 09 11:02:37 compute-2 ceph-mon[6044]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 09 11:02:37 compute-2 ceph-mon[6044]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 09 11:02:37 compute-2 ceph-mon[6044]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 09 11:02:37 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct 09 11:02:38 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:38 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:38 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:38 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:38 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:38 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:38 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:38 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:38 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:38.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:38 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:38 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd4002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:38 compute-2 crontab[20153]: (root) LIST (root)
Oct 09 11:02:38 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:38 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:38 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:38.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Oct 09 11:02:38 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:38 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:39 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:39 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:39 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:39 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:39 compute-2 ceph-mon[6044]: pgmap v43: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:39 compute-2 ceph-mon[6044]: osdmap e95: 3 total, 3 up, 3 in
Oct 09 11:02:39 compute-2 ceph-mon[6044]: osdmap e96: 3 total, 3 up, 3 in
Oct 09 11:02:40 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:40 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:40 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:40 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:40 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:40 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:40 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:40 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:40 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:40.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:40 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:40 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd8002d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:40 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:40 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:40 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:40 compute-2 systemd[1]: Starting Hostname Service...
Oct 09 11:02:40 compute-2 systemd[1]: Started Hostname Service.
Oct 09 11:02:40 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:40 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cd4002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:41 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct 09 11:02:41 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 97 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=93/94 n=6 ec=55/36 lis/c=93/71 les/c/f=94/72/0 sis=97 pruub=10.755433083s) [1] async=[1] r=-1 lpr=97 pi=[71,97)/1 crt=42'1020 mlcod 42'1020 active pruub 193.375442505s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 11:02:41 compute-2 ceph-osd[8575]: osd.2 pg_epoch: 97 pg[9.d( v 42'1020 (0'0,42'1020] local-lis/les=93/94 n=6 ec=55/36 lis/c=93/71 les/c/f=94/72/0 sis=97 pruub=10.755350113s) [1] r=-1 lpr=97 pi=[71,97)/1 crt=42'1020 mlcod 0'0 unknown NOTIFY pruub 193.375442505s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 11:02:41 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:41 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:41 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:41 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:41 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct 09 11:02:41 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct 09 11:02:41 compute-2 ceph-mon[6044]: pgmap v46: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 11:02:41 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:41 compute-2 ceph-mon[6044]: from='mgr.14712 192.168.122.100:0/3554718281' entity='mgr.compute-0.izrudc' 
Oct 09 11:02:42 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-rgw-default-compute-2-txrqnp[17237]: Thu Oct  9 11:02:42 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:42 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-keepalived-nfs-cephfs-compute-2-dxpkeo[16513]: Thu Oct  9 11:02:42 2025: (VI_0) received an invalid passwd!
Oct 09 11:02:42 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct 09 11:02:42 compute-2 ceph-mon[6044]: mon.compute-2@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 11:02:42 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:42 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1cc0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:42 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:42 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 11:02:42 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.102 - anonymous [09/Oct/2025:11:02:42.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 11:02:42 compute-2 ceph-e990987d-9393-5e96-99ae-9e3a3319f191-nfs-cephfs-1-0-compute-2-mtmthg[16642]: 09/10/2025 11:02:42 : epoch 68e79611 : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1ce00029e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 11:02:42 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 09 11:02:42 compute-2 ceph-osd[8575]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 09 11:02:42 compute-2 radosgw[11550]: ====== starting new request req=0x7f23f6d2a5d0 =====
Oct 09 11:02:42 compute-2 radosgw[11550]: ====== req done req=0x7f23f6d2a5d0 op status=0 http_status=200 latency=0.001000029s ======
Oct 09 11:02:42 compute-2 radosgw[11550]: beast: 0x7f23f6d2a5d0: 192.168.122.100 - anonymous [09/Oct/2025:11:02:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s