Dec 11 08:30:15 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 11 08:30:15 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 11 08:30:15 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 08:30:15 localhost kernel: BIOS-provided physical RAM map:
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 11 08:30:15 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 11 08:30:15 localhost kernel: NX (Execute Disable) protection: active
Dec 11 08:30:15 localhost kernel: APIC: Static calls initialized
Dec 11 08:30:15 localhost kernel: SMBIOS 2.8 present.
Dec 11 08:30:15 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 11 08:30:15 localhost kernel: Hypervisor detected: KVM
Dec 11 08:30:15 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 11 08:30:15 localhost kernel: kvm-clock: using sched offset of 3183053485 cycles
Dec 11 08:30:15 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 11 08:30:15 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 11 08:30:15 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 11 08:30:15 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 11 08:30:15 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 11 08:30:15 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 11 08:30:15 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 11 08:30:15 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 11 08:30:15 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 11 08:30:15 localhost kernel: Using GB pages for direct mapping
Dec 11 08:30:15 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 11 08:30:15 localhost kernel: ACPI: Early table checksum verification disabled
Dec 11 08:30:15 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 11 08:30:15 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:15 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:15 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:15 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 11 08:30:15 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:15 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 11 08:30:15 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 11 08:30:15 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 11 08:30:15 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 11 08:30:15 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 11 08:30:15 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 11 08:30:15 localhost kernel: No NUMA configuration found
Dec 11 08:30:15 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 11 08:30:15 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 11 08:30:15 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 11 08:30:15 localhost kernel: Zone ranges:
Dec 11 08:30:15 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 11 08:30:15 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 11 08:30:15 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 11 08:30:15 localhost kernel:   Device   empty
Dec 11 08:30:15 localhost kernel: Movable zone start for each node
Dec 11 08:30:15 localhost kernel: Early memory node ranges
Dec 11 08:30:15 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 11 08:30:15 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 11 08:30:15 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 11 08:30:15 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 11 08:30:15 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 11 08:30:15 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 11 08:30:15 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 11 08:30:15 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 11 08:30:15 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 11 08:30:15 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 11 08:30:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 11 08:30:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 11 08:30:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 11 08:30:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 11 08:30:15 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 11 08:30:15 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 11 08:30:15 localhost kernel: TSC deadline timer available
Dec 11 08:30:15 localhost kernel: CPU topo: Max. logical packages:   8
Dec 11 08:30:15 localhost kernel: CPU topo: Max. logical dies:       8
Dec 11 08:30:15 localhost kernel: CPU topo: Max. dies per package:   1
Dec 11 08:30:15 localhost kernel: CPU topo: Max. threads per core:   1
Dec 11 08:30:15 localhost kernel: CPU topo: Num. cores per package:     1
Dec 11 08:30:15 localhost kernel: CPU topo: Num. threads per package:   1
Dec 11 08:30:15 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 11 08:30:15 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 11 08:30:15 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 11 08:30:15 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 11 08:30:15 localhost kernel: Booting paravirtualized kernel on KVM
Dec 11 08:30:15 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 11 08:30:15 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 11 08:30:15 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 11 08:30:15 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 11 08:30:15 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 11 08:30:15 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 11 08:30:15 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 08:30:15 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 11 08:30:15 localhost kernel: random: crng init done
Dec 11 08:30:15 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 11 08:30:15 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 11 08:30:15 localhost kernel: Fallback order for Node 0: 0 
Dec 11 08:30:15 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 11 08:30:15 localhost kernel: Policy zone: Normal
Dec 11 08:30:15 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 11 08:30:15 localhost kernel: software IO TLB: area num 8.
Dec 11 08:30:15 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 11 08:30:15 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 11 08:30:15 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 11 08:30:15 localhost kernel: Dynamic Preempt: voluntary
Dec 11 08:30:15 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 11 08:30:15 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 11 08:30:15 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 11 08:30:15 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 11 08:30:15 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 11 08:30:15 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 11 08:30:15 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 11 08:30:15 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 11 08:30:15 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 08:30:15 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 08:30:15 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 11 08:30:15 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 11 08:30:15 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 11 08:30:15 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 11 08:30:15 localhost kernel: Console: colour VGA+ 80x25
Dec 11 08:30:15 localhost kernel: printk: console [ttyS0] enabled
Dec 11 08:30:15 localhost kernel: ACPI: Core revision 20230331
Dec 11 08:30:15 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 11 08:30:15 localhost kernel: x2apic enabled
Dec 11 08:30:15 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 11 08:30:15 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 11 08:30:15 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 11 08:30:15 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 11 08:30:15 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 11 08:30:15 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 11 08:30:15 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 11 08:30:15 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 11 08:30:15 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 11 08:30:15 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 11 08:30:15 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 11 08:30:15 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 11 08:30:15 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 11 08:30:15 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 11 08:30:15 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 11 08:30:15 localhost kernel: x86/bugs: return thunk changed
Dec 11 08:30:15 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 11 08:30:15 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 11 08:30:15 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 11 08:30:15 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 11 08:30:15 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 11 08:30:15 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 11 08:30:15 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 11 08:30:15 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 11 08:30:15 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 11 08:30:15 localhost kernel: landlock: Up and running.
Dec 11 08:30:15 localhost kernel: Yama: becoming mindful.
Dec 11 08:30:15 localhost kernel: SELinux:  Initializing.
Dec 11 08:30:15 localhost kernel: LSM support for eBPF active
Dec 11 08:30:15 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 11 08:30:15 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 11 08:30:15 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 11 08:30:15 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 11 08:30:15 localhost kernel: ... version:                0
Dec 11 08:30:15 localhost kernel: ... bit width:              48
Dec 11 08:30:15 localhost kernel: ... generic registers:      6
Dec 11 08:30:15 localhost kernel: ... value mask:             0000ffffffffffff
Dec 11 08:30:15 localhost kernel: ... max period:             00007fffffffffff
Dec 11 08:30:15 localhost kernel: ... fixed-purpose events:   0
Dec 11 08:30:15 localhost kernel: ... event mask:             000000000000003f
Dec 11 08:30:15 localhost kernel: signal: max sigframe size: 1776
Dec 11 08:30:15 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 11 08:30:15 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 11 08:30:15 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 11 08:30:15 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 11 08:30:15 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 11 08:30:15 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 11 08:30:15 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 11 08:30:15 localhost kernel: node 0 deferred pages initialised in 9ms
Dec 11 08:30:15 localhost kernel: Memory: 7764032K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618220K reserved, 0K cma-reserved)
Dec 11 08:30:15 localhost kernel: devtmpfs: initialized
Dec 11 08:30:15 localhost kernel: x86/mm: Memory block size: 128MB
Dec 11 08:30:15 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 11 08:30:15 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 11 08:30:15 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 11 08:30:15 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 11 08:30:15 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 11 08:30:15 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 11 08:30:15 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 11 08:30:15 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 11 08:30:15 localhost kernel: audit: type=2000 audit(1765441813.923:1): state=initialized audit_enabled=0 res=1
Dec 11 08:30:15 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 11 08:30:15 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 11 08:30:15 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 11 08:30:15 localhost kernel: cpuidle: using governor menu
Dec 11 08:30:15 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 11 08:30:15 localhost kernel: PCI: Using configuration type 1 for base access
Dec 11 08:30:15 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 11 08:30:15 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 11 08:30:15 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 11 08:30:15 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 11 08:30:15 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 11 08:30:15 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 11 08:30:15 localhost kernel: Demotion targets for Node 0: null
Dec 11 08:30:15 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 11 08:30:15 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 11 08:30:15 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 11 08:30:15 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 11 08:30:15 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 11 08:30:15 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 11 08:30:15 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 11 08:30:15 localhost kernel: ACPI: Interpreter enabled
Dec 11 08:30:15 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 11 08:30:15 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 11 08:30:15 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 11 08:30:15 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 11 08:30:15 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 11 08:30:15 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 11 08:30:15 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [3] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [4] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [5] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [6] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [7] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [8] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [9] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [10] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [11] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [12] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [13] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [14] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [15] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [16] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [17] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [18] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [19] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [20] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [21] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [22] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [23] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [24] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [25] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [26] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [27] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [28] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [29] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [30] registered
Dec 11 08:30:15 localhost kernel: acpiphp: Slot [31] registered
Dec 11 08:30:15 localhost kernel: PCI host bridge to bus 0000:00
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 11 08:30:15 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 11 08:30:15 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 11 08:30:15 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 11 08:30:15 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 11 08:30:15 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 11 08:30:15 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 11 08:30:15 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 11 08:30:15 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 11 08:30:15 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 11 08:30:15 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 11 08:30:15 localhost kernel: iommu: Default domain type: Translated
Dec 11 08:30:15 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 11 08:30:15 localhost kernel: SCSI subsystem initialized
Dec 11 08:30:15 localhost kernel: ACPI: bus type USB registered
Dec 11 08:30:15 localhost kernel: usbcore: registered new interface driver usbfs
Dec 11 08:30:15 localhost kernel: usbcore: registered new interface driver hub
Dec 11 08:30:15 localhost kernel: usbcore: registered new device driver usb
Dec 11 08:30:15 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 11 08:30:15 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 11 08:30:15 localhost kernel: PTP clock support registered
Dec 11 08:30:15 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 11 08:30:15 localhost kernel: NetLabel: Initializing
Dec 11 08:30:15 localhost kernel: NetLabel:  domain hash size = 128
Dec 11 08:30:15 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 11 08:30:15 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 11 08:30:15 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 11 08:30:15 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 11 08:30:15 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 11 08:30:15 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 11 08:30:15 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 11 08:30:15 localhost kernel: vgaarb: loaded
Dec 11 08:30:15 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 11 08:30:15 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 11 08:30:15 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 11 08:30:15 localhost kernel: pnp: PnP ACPI init
Dec 11 08:30:15 localhost kernel: pnp 00:03: [dma 2]
Dec 11 08:30:15 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 11 08:30:15 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 11 08:30:15 localhost kernel: NET: Registered PF_INET protocol family
Dec 11 08:30:15 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 11 08:30:15 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 11 08:30:15 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 11 08:30:15 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 11 08:30:15 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 11 08:30:15 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 11 08:30:15 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 11 08:30:15 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 11 08:30:15 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 11 08:30:15 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 11 08:30:15 localhost kernel: NET: Registered PF_XDP protocol family
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 11 08:30:15 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 11 08:30:15 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 11 08:30:15 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 11 08:30:15 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 78646 usecs
Dec 11 08:30:15 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 11 08:30:15 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 11 08:30:15 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 11 08:30:15 localhost kernel: ACPI: bus type thunderbolt registered
Dec 11 08:30:15 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 11 08:30:15 localhost kernel: Initialise system trusted keyrings
Dec 11 08:30:15 localhost kernel: Key type blacklist registered
Dec 11 08:30:15 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 11 08:30:15 localhost kernel: zbud: loaded
Dec 11 08:30:15 localhost kernel: integrity: Platform Keyring initialized
Dec 11 08:30:15 localhost kernel: integrity: Machine keyring initialized
Dec 11 08:30:15 localhost kernel: Freeing initrd memory: 87820K
Dec 11 08:30:15 localhost kernel: NET: Registered PF_ALG protocol family
Dec 11 08:30:15 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 11 08:30:15 localhost kernel: Key type asymmetric registered
Dec 11 08:30:15 localhost kernel: Asymmetric key parser 'x509' registered
Dec 11 08:30:15 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 11 08:30:15 localhost kernel: io scheduler mq-deadline registered
Dec 11 08:30:15 localhost kernel: io scheduler kyber registered
Dec 11 08:30:15 localhost kernel: io scheduler bfq registered
Dec 11 08:30:15 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 11 08:30:15 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 11 08:30:15 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 11 08:30:15 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 11 08:30:15 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 11 08:30:15 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 11 08:30:15 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 11 08:30:15 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 11 08:30:15 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 11 08:30:15 localhost kernel: Non-volatile memory driver v1.3
Dec 11 08:30:15 localhost kernel: rdac: device handler registered
Dec 11 08:30:15 localhost kernel: hp_sw: device handler registered
Dec 11 08:30:15 localhost kernel: emc: device handler registered
Dec 11 08:30:15 localhost kernel: alua: device handler registered
Dec 11 08:30:15 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 11 08:30:15 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 11 08:30:15 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 11 08:30:15 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 11 08:30:15 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 11 08:30:15 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 11 08:30:15 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 11 08:30:15 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 11 08:30:15 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 11 08:30:15 localhost kernel: hub 1-0:1.0: USB hub found
Dec 11 08:30:15 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 11 08:30:15 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 11 08:30:15 localhost kernel: usbserial: USB Serial support registered for generic
Dec 11 08:30:15 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 11 08:30:15 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 11 08:30:15 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 11 08:30:15 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 11 08:30:15 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 11 08:30:15 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 11 08:30:15 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 11 08:30:15 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-11T08:30:14 UTC (1765441814)
Dec 11 08:30:15 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 11 08:30:15 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 11 08:30:15 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 11 08:30:15 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 11 08:30:15 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 11 08:30:15 localhost kernel: usbcore: registered new interface driver usbhid
Dec 11 08:30:15 localhost kernel: usbhid: USB HID core driver
Dec 11 08:30:15 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 11 08:30:15 localhost kernel: Initializing XFRM netlink socket
Dec 11 08:30:15 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 11 08:30:15 localhost kernel: Segment Routing with IPv6
Dec 11 08:30:15 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 11 08:30:15 localhost kernel: mpls_gso: MPLS GSO support
Dec 11 08:30:15 localhost kernel: IPI shorthand broadcast: enabled
Dec 11 08:30:15 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 11 08:30:15 localhost kernel: AES CTR mode by8 optimization enabled
Dec 11 08:30:15 localhost kernel: sched_clock: Marking stable (1181006027, 153991655)->(1462467273, -127469591)
Dec 11 08:30:15 localhost kernel: registered taskstats version 1
Dec 11 08:30:15 localhost kernel: Loading compiled-in X.509 certificates
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 11 08:30:15 localhost kernel: Demotion targets for Node 0: null
Dec 11 08:30:15 localhost kernel: page_owner is disabled
Dec 11 08:30:15 localhost kernel: Key type .fscrypt registered
Dec 11 08:30:15 localhost kernel: Key type fscrypt-provisioning registered
Dec 11 08:30:15 localhost kernel: Key type big_key registered
Dec 11 08:30:15 localhost kernel: Key type encrypted registered
Dec 11 08:30:15 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 11 08:30:15 localhost kernel: Loading compiled-in module X.509 certificates
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 11 08:30:15 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 11 08:30:15 localhost kernel: ima: No architecture policies found
Dec 11 08:30:15 localhost kernel: evm: Initialising EVM extended attributes:
Dec 11 08:30:15 localhost kernel: evm: security.selinux
Dec 11 08:30:15 localhost kernel: evm: security.SMACK64 (disabled)
Dec 11 08:30:15 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 11 08:30:15 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 11 08:30:15 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 11 08:30:15 localhost kernel: evm: security.apparmor (disabled)
Dec 11 08:30:15 localhost kernel: evm: security.ima
Dec 11 08:30:15 localhost kernel: evm: security.capability
Dec 11 08:30:15 localhost kernel: evm: HMAC attrs: 0x1
Dec 11 08:30:15 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 11 08:30:15 localhost kernel: Running certificate verification RSA selftest
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 11 08:30:15 localhost kernel: Running certificate verification ECDSA selftest
Dec 11 08:30:15 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 11 08:30:15 localhost kernel: clk: Disabling unused clocks
Dec 11 08:30:15 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 11 08:30:15 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 11 08:30:15 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 11 08:30:15 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 11 08:30:15 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 11 08:30:15 localhost kernel: Run /init as init process
Dec 11 08:30:15 localhost kernel:   with arguments:
Dec 11 08:30:15 localhost kernel:     /init
Dec 11 08:30:15 localhost kernel:   with environment:
Dec 11 08:30:15 localhost kernel:     HOME=/
Dec 11 08:30:15 localhost kernel:     TERM=linux
Dec 11 08:30:15 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 11 08:30:15 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 11 08:30:15 localhost systemd[1]: Detected virtualization kvm.
Dec 11 08:30:15 localhost systemd[1]: Detected architecture x86-64.
Dec 11 08:30:15 localhost systemd[1]: Running in initrd.
Dec 11 08:30:15 localhost systemd[1]: No hostname configured, using default hostname.
Dec 11 08:30:15 localhost systemd[1]: Hostname set to <localhost>.
Dec 11 08:30:15 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 11 08:30:15 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 11 08:30:15 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 11 08:30:15 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 11 08:30:15 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 11 08:30:15 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 11 08:30:15 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 11 08:30:15 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 11 08:30:15 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 11 08:30:15 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 11 08:30:15 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 11 08:30:15 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 11 08:30:15 localhost systemd[1]: Reached target Local File Systems.
Dec 11 08:30:15 localhost systemd[1]: Reached target Path Units.
Dec 11 08:30:15 localhost systemd[1]: Reached target Slice Units.
Dec 11 08:30:15 localhost systemd[1]: Reached target Swaps.
Dec 11 08:30:15 localhost systemd[1]: Reached target Timer Units.
Dec 11 08:30:15 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 11 08:30:15 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 11 08:30:15 localhost systemd[1]: Listening on Journal Socket.
Dec 11 08:30:15 localhost systemd[1]: Listening on udev Control Socket.
Dec 11 08:30:15 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 11 08:30:15 localhost systemd[1]: Reached target Socket Units.
Dec 11 08:30:15 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 11 08:30:15 localhost systemd[1]: Starting Journal Service...
Dec 11 08:30:15 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 11 08:30:15 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 11 08:30:15 localhost systemd[1]: Starting Create System Users...
Dec 11 08:30:15 localhost systemd[1]: Starting Setup Virtual Console...
Dec 11 08:30:15 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 11 08:30:15 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 11 08:30:15 localhost systemd[1]: Finished Create System Users.
Dec 11 08:30:15 localhost systemd-journald[304]: Journal started
Dec 11 08:30:15 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/290d097a37bb4fb79107fc4fe3107457) is 8.0M, max 153.6M, 145.6M free.
Dec 11 08:30:15 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Dec 11 08:30:15 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Dec 11 08:30:15 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 11 08:30:15 localhost systemd[1]: Started Journal Service.
Dec 11 08:30:15 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 11 08:30:15 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 11 08:30:15 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 11 08:30:15 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 11 08:30:15 localhost systemd[1]: Finished Setup Virtual Console.
Dec 11 08:30:15 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 11 08:30:15 localhost systemd[1]: Starting dracut cmdline hook...
Dec 11 08:30:15 localhost dracut-cmdline[322]: dracut-9 dracut-057-102.git20250818.el9
Dec 11 08:30:15 localhost dracut-cmdline[322]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 11 08:30:15 localhost systemd[1]: Finished dracut cmdline hook.
Dec 11 08:30:15 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 11 08:30:15 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 11 08:30:15 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 11 08:30:15 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 11 08:30:15 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 11 08:30:15 localhost kernel: RPC: Registered udp transport module.
Dec 11 08:30:15 localhost kernel: RPC: Registered tcp transport module.
Dec 11 08:30:15 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 11 08:30:15 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 11 08:30:15 localhost rpc.statd[440]: Version 2.5.4 starting
Dec 11 08:30:15 localhost rpc.statd[440]: Initializing NSM state
Dec 11 08:30:15 localhost rpc.idmapd[445]: Setting log level to 0
Dec 11 08:30:15 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 11 08:30:15 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 11 08:30:15 localhost systemd-udevd[458]: Using default interface naming scheme 'rhel-9.0'.
Dec 11 08:30:15 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 11 08:30:15 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 11 08:30:15 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 11 08:30:15 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 11 08:30:15 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 11 08:30:15 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 11 08:30:15 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 11 08:30:15 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 08:30:15 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 11 08:30:15 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 11 08:30:15 localhost systemd[1]: Reached target Network.
Dec 11 08:30:15 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 11 08:30:15 localhost systemd[1]: Starting dracut initqueue hook...
Dec 11 08:30:15 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 11 08:30:15 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 11 08:30:15 localhost kernel:  vda: vda1
Dec 11 08:30:15 localhost kernel: libata version 3.00 loaded.
Dec 11 08:30:15 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 11 08:30:15 localhost systemd-udevd[475]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:30:15 localhost kernel: scsi host0: ata_piix
Dec 11 08:30:15 localhost kernel: scsi host1: ata_piix
Dec 11 08:30:15 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 11 08:30:15 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 11 08:30:15 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 11 08:30:15 localhost systemd[1]: Reached target Initrd Root Device.
Dec 11 08:30:16 localhost kernel: ata1: found unknown device (class 0)
Dec 11 08:30:16 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 11 08:30:16 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 11 08:30:16 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 11 08:30:16 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 11 08:30:16 localhost systemd[1]: Reached target System Initialization.
Dec 11 08:30:16 localhost systemd[1]: Reached target Basic System.
Dec 11 08:30:16 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 11 08:30:16 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 11 08:30:16 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 11 08:30:16 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 11 08:30:16 localhost systemd[1]: Finished dracut initqueue hook.
Dec 11 08:30:16 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 11 08:30:16 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 11 08:30:16 localhost systemd[1]: Reached target Remote File Systems.
Dec 11 08:30:16 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 11 08:30:16 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 11 08:30:16 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 11 08:30:16 localhost systemd-fsck[551]: /usr/sbin/fsck.xfs: XFS file system.
Dec 11 08:30:16 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 11 08:30:16 localhost systemd[1]: Mounting /sysroot...
Dec 11 08:30:16 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 11 08:30:16 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 11 08:30:16 localhost kernel: XFS (vda1): Ending clean mount
Dec 11 08:30:16 localhost systemd[1]: Mounted /sysroot.
Dec 11 08:30:16 localhost systemd[1]: Reached target Initrd Root File System.
Dec 11 08:30:16 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 11 08:30:16 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 11 08:30:16 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 11 08:30:16 localhost systemd[1]: Reached target Initrd File Systems.
Dec 11 08:30:16 localhost systemd[1]: Reached target Initrd Default Target.
Dec 11 08:30:16 localhost systemd[1]: Starting dracut mount hook...
Dec 11 08:30:16 localhost systemd[1]: Finished dracut mount hook.
Dec 11 08:30:16 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 11 08:30:16 localhost rpc.idmapd[445]: exiting on signal 15
Dec 11 08:30:17 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 11 08:30:17 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 11 08:30:17 localhost systemd[1]: Stopped target Network.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Timer Units.
Dec 11 08:30:17 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 11 08:30:17 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Basic System.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Path Units.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Remote File Systems.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Slice Units.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Socket Units.
Dec 11 08:30:17 localhost systemd[1]: Stopped target System Initialization.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Local File Systems.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Swaps.
Dec 11 08:30:17 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut mount hook.
Dec 11 08:30:17 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 11 08:30:17 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 11 08:30:17 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 11 08:30:17 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 11 08:30:17 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 11 08:30:17 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 11 08:30:17 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 11 08:30:17 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 11 08:30:17 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 11 08:30:17 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 11 08:30:17 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 11 08:30:17 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Closed udev Control Socket.
Dec 11 08:30:17 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Closed udev Kernel Socket.
Dec 11 08:30:17 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 11 08:30:17 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 11 08:30:17 localhost systemd[1]: Starting Cleanup udev Database...
Dec 11 08:30:17 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 11 08:30:17 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 11 08:30:17 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Create System Users.
Dec 11 08:30:17 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished Cleanup udev Database.
Dec 11 08:30:17 localhost systemd[1]: Reached target Switch Root.
Dec 11 08:30:17 localhost systemd[1]: Starting Switch Root...
Dec 11 08:30:17 localhost systemd[1]: Switching root.
Dec 11 08:30:17 localhost systemd-journald[304]: Journal stopped
Dec 11 08:30:17 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Dec 11 08:30:17 localhost kernel: audit: type=1404 audit(1765441817.218:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability open_perms=1
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:30:17 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:30:17 localhost kernel: audit: type=1403 audit(1765441817.342:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 11 08:30:17 localhost systemd[1]: Successfully loaded SELinux policy in 128.075ms.
Dec 11 08:30:17 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.280ms.
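The two audit records above (type 1404 and type 1403) mark SELinux switching to enforcing mode and the policy load completing; systemd then reports the load took about 128 ms. On a host in this state, the mode can be confirmed with standard tooling (illustrative commands, not part of the captured log):

    getenforce                    # prints Enforcing, Permissive, or Disabled
    sestatus                      # mode, loaded policy name, and selinuxfs mount point
    cat /sys/fs/selinux/enforce   # raw kernel flag: 1 = enforcing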
Dec 11 08:30:17 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 11 08:30:17 localhost systemd[1]: Detected virtualization kvm.
Dec 11 08:30:17 localhost systemd[1]: Detected architecture x86-64.
Dec 11 08:30:17 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:30:17 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped Switch Root.
Dec 11 08:30:17 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 11 08:30:17 localhost systemd[1]: Created slice Slice /system/getty.
Dec 11 08:30:17 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 11 08:30:17 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 11 08:30:17 localhost systemd[1]: Created slice User and Session Slice.
Dec 11 08:30:17 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 11 08:30:17 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 11 08:30:17 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 11 08:30:17 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Switch Root.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 11 08:30:17 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 11 08:30:17 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 11 08:30:17 localhost systemd[1]: Reached target Path Units.
Dec 11 08:30:17 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 11 08:30:17 localhost systemd[1]: Reached target Slice Units.
Dec 11 08:30:17 localhost systemd[1]: Reached target Swaps.
Dec 11 08:30:17 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 11 08:30:17 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 11 08:30:17 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 11 08:30:17 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 11 08:30:17 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 11 08:30:17 localhost systemd[1]: Listening on udev Control Socket.
Dec 11 08:30:17 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 11 08:30:17 localhost systemd[1]: Mounting Huge Pages File System...
Dec 11 08:30:17 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 11 08:30:17 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 11 08:30:17 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 11 08:30:17 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 11 08:30:17 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 11 08:30:17 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 11 08:30:17 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 11 08:30:17 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 11 08:30:17 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 11 08:30:17 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 11 08:30:17 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 11 08:30:17 localhost systemd[1]: Stopped Journal Service.
Dec 11 08:30:17 localhost kernel: fuse: init (API version 7.37)
Dec 11 08:30:17 localhost systemd[1]: Starting Journal Service...
Dec 11 08:30:17 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 11 08:30:17 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 11 08:30:17 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 08:30:17 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 11 08:30:17 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 11 08:30:17 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 11 08:30:17 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
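The 2038 warning above indicates the root XFS filesystem was created without the bigtime feature, so inode timestamps cap at 0x7fffffff (January 2038). Assuming xfsprogs is available, this is visible in the superblock geometry:

    xfs_info / | grep -o 'bigtime=[01]'   # bigtime=0 corresponds to the 2038 limit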
Dec 11 08:30:17 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 11 08:30:17 localhost systemd-journald[676]: Journal started
Dec 11 08:30:17 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 11 08:30:17 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 11 08:30:17 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Started Journal Service.
Dec 11 08:30:17 localhost systemd[1]: Mounted Huge Pages File System.
Dec 11 08:30:17 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 11 08:30:17 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 11 08:30:17 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 11 08:30:17 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 11 08:30:17 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 11 08:30:17 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 11 08:30:17 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 11 08:30:17 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 11 08:30:17 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 11 08:30:17 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 11 08:30:17 localhost kernel: ACPI: bus type drm_connector registered
Dec 11 08:30:17 localhost systemd[1]: Mounting FUSE Control File System...
Dec 11 08:30:17 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 11 08:30:17 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 11 08:30:17 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 11 08:30:17 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 11 08:30:17 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 11 08:30:17 localhost systemd[1]: Starting Create System Users...
Dec 11 08:30:17 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 11 08:30:17 localhost systemd-journald[676]: Received client request to flush runtime journal.
Dec 11 08:30:17 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 11 08:30:17 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 11 08:30:17 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 11 08:30:17 localhost systemd[1]: Mounted FUSE Control File System.
Dec 11 08:30:17 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 11 08:30:17 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 11 08:30:17 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 11 08:30:17 localhost systemd[1]: Finished Create System Users.
Dec 11 08:30:17 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 11 08:30:17 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 11 08:30:17 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 11 08:30:17 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 11 08:30:17 localhost systemd[1]: Reached target Local File Systems.
Dec 11 08:30:17 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 11 08:30:17 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 11 08:30:17 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 11 08:30:17 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 11 08:30:17 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 11 08:30:17 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 11 08:30:17 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 11 08:30:17 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Dec 11 08:30:17 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 11 08:30:17 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 11 08:30:18 localhost systemd[1]: Starting Security Auditing Service...
Dec 11 08:30:18 localhost systemd[1]: Starting RPC Bind...
Dec 11 08:30:18 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 11 08:30:18 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 11 08:30:18 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 11 08:30:18 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 11 08:30:18 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 11 08:30:18 localhost systemd[1]: Started RPC Bind.
Dec 11 08:30:18 localhost augenrules[705]: /sbin/augenrules: No change
Dec 11 08:30:18 localhost augenrules[720]: No rules
Dec 11 08:30:18 localhost augenrules[720]: enabled 1
Dec 11 08:30:18 localhost augenrules[720]: failure 1
Dec 11 08:30:18 localhost augenrules[720]: pid 700
Dec 11 08:30:18 localhost augenrules[720]: rate_limit 0
Dec 11 08:30:18 localhost augenrules[720]: backlog_limit 8192
Dec 11 08:30:18 localhost augenrules[720]: lost 0
Dec 11 08:30:18 localhost augenrules[720]: backlog 0
Dec 11 08:30:18 localhost augenrules[720]: backlog_wait_time 60000
Dec 11 08:30:18 localhost augenrules[720]: backlog_wait_time_actual 0
Dec 11 08:30:18 localhost augenrules[720]: enabled 1
Dec 11 08:30:18 localhost augenrules[720]: failure 1
Dec 11 08:30:18 localhost augenrules[720]: pid 700
Dec 11 08:30:18 localhost augenrules[720]: rate_limit 0
Dec 11 08:30:18 localhost augenrules[720]: backlog_limit 8192
Dec 11 08:30:18 localhost augenrules[720]: lost 0
Dec 11 08:30:18 localhost augenrules[720]: backlog 4
Dec 11 08:30:18 localhost augenrules[720]: backlog_wait_time 60000
Dec 11 08:30:18 localhost augenrules[720]: backlog_wait_time_actual 0
Dec 11 08:30:18 localhost augenrules[720]: enabled 1
Dec 11 08:30:18 localhost augenrules[720]: failure 1
Dec 11 08:30:18 localhost augenrules[720]: pid 700
Dec 11 08:30:18 localhost augenrules[720]: rate_limit 0
Dec 11 08:30:18 localhost augenrules[720]: backlog_limit 8192
Dec 11 08:30:18 localhost augenrules[720]: lost 0
Dec 11 08:30:18 localhost augenrules[720]: backlog 4
Dec 11 08:30:18 localhost augenrules[720]: backlog_wait_time 60000
Dec 11 08:30:18 localhost augenrules[720]: backlog_wait_time_actual 0
Dec 11 08:30:18 localhost systemd[1]: Started Security Auditing Service.
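The augenrules blocks above are the kernel's audit status echoed once per pass; "No rules" with backlog_limit 8192 matches a default ruleset, and the backlog growing from 0 to 4 between passes is just events queued during boot. The same counters can be re-read at any time (illustrative):

    auditctl -s   # enabled, failure mode, backlog_limit, lost, backlog, ...
    auditctl -l   # loaded rules; prints "No rules" on this host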
Dec 11 08:30:18 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 11 08:30:18 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 11 08:30:18 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 11 08:30:18 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 11 08:30:18 localhost systemd[1]: Starting Update is Completed...
Dec 11 08:30:18 localhost systemd[1]: Finished Update is Completed.
Dec 11 08:30:18 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec 11 08:30:18 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 11 08:30:18 localhost systemd[1]: Reached target System Initialization.
Dec 11 08:30:18 localhost systemd[1]: Started dnf makecache --timer.
Dec 11 08:30:18 localhost systemd[1]: Started Daily rotation of log files.
Dec 11 08:30:18 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 11 08:30:18 localhost systemd[1]: Reached target Timer Units.
Dec 11 08:30:18 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 11 08:30:18 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 11 08:30:18 localhost systemd[1]: Reached target Socket Units.
Dec 11 08:30:18 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 11 08:30:18 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 08:30:18 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 11 08:30:18 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 11 08:30:18 localhost systemd-udevd[734]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:30:18 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 11 08:30:18 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 11 08:30:18 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 11 08:30:18 localhost systemd[1]: Reached target Basic System.
Dec 11 08:30:18 localhost dbus-broker-lau[746]: Ready
Dec 11 08:30:18 localhost systemd[1]: Starting NTP client/server...
Dec 11 08:30:18 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 11 08:30:18 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 11 08:30:18 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 11 08:30:18 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 11 08:30:18 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 11 08:30:18 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 11 08:30:18 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 11 08:30:18 localhost systemd[1]: Started irqbalance daemon.
Dec 11 08:30:18 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 11 08:30:18 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:30:18 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:30:18 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 11 08:30:18 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 11 08:30:18 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 11 08:30:18 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 11 08:30:18 localhost chronyd[787]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 11 08:30:18 localhost chronyd[787]: Loaded 0 symmetric keys
Dec 11 08:30:18 localhost chronyd[787]: Using right/UTC timezone to obtain leap second data
Dec 11 08:30:18 localhost chronyd[787]: Loaded seccomp filter (level 2)
Dec 11 08:30:18 localhost systemd[1]: Starting User Login Management...
Dec 11 08:30:18 localhost systemd[1]: Started NTP client/server.
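chronyd comes up with no symmetric keys and a level-2 seccomp filter, and only selects an upstream server about eight seconds later (the "Selected source 147.189.136.126" line further down). Once it is running, synchronization state could be inspected with, for example:

    chronyc tracking     # reference ID, stratum, and current offset
    chronyc sources -v   # candidate NTP sources and selection flags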
Dec 11 08:30:18 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 11 08:30:18 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 11 08:30:18 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 11 08:30:18 localhost kernel: Console: switching to colour dummy device 80x25
Dec 11 08:30:18 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 11 08:30:18 localhost kernel: [drm] features: -context_init
Dec 11 08:30:18 localhost kernel: [drm] number of scanouts: 1
Dec 11 08:30:18 localhost kernel: [drm] number of cap sets: 0
Dec 11 08:30:18 localhost systemd-logind[791]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 11 08:30:18 localhost systemd-logind[791]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 11 08:30:18 localhost systemd-logind[791]: New seat seat0.
Dec 11 08:30:18 localhost kernel: kvm_amd: TSC scaling supported
Dec 11 08:30:18 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 11 08:30:18 localhost kernel: kvm_amd: Nested Paging enabled
Dec 11 08:30:18 localhost kernel: kvm_amd: LBR virtualization supported
Dec 11 08:30:18 localhost systemd[1]: Started User Login Management.
Dec 11 08:30:18 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 11 08:30:18 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 11 08:30:18 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 11 08:30:18 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 11 08:30:18 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 11 08:30:18 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 11 08:30:18 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Dec 11 08:30:18 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 11 08:30:19 localhost cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 11 Dec 2025 08:30:19 +0000. Up 5.65 seconds.
Dec 11 08:30:19 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 11 08:30:19 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 11 08:30:19 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpayehi72s.mount: Deactivated successfully.
Dec 11 08:30:19 localhost systemd[1]: Starting Hostname Service...
Dec 11 08:30:19 localhost systemd[1]: Started Hostname Service.
Dec 11 08:30:19 np0005555078.novalocal systemd-hostnamed[852]: Hostname set to <np0005555078.novalocal> (static)
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
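Cloud-init's local stage set the static hostname via systemd-hostnamed, which is why the log prefix flips from "localhost" to "np0005555078.novalocal" from this point on. The persisted value can be checked with:

    hostnamectl status   # static vs. transient hostname
    cat /etc/hostname    # where the static name is stored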
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Reached target Preparation for Network.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Starting Network Manager...
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4696] NetworkManager (version 1.54.2-1.el9) is starting... (boot:32e43615-624a-4aad-9e4a-02351ae2816f)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4700] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4759] manager[0x563421c06000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4793] hostname: hostname: using hostnamed
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4793] hostname: static hostname changed from (none) to "np0005555078.novalocal"
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4797] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4880] manager[0x563421c06000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4881] manager[0x563421c06000]: rfkill: WWAN hardware radio set enabled
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4915] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4915] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4915] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4916] manager: Networking is enabled by state file
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4918] settings: Loaded settings plugin: keyfile (internal)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4934] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4949] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4959] dhcp: init: Using DHCP client 'internal'
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4961] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4972] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4978] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4983] device (lo): Activation: starting connection 'lo' (2718cfd2-2948-42fc-8bc2-9c6575230f6f)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4990] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.4993] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5021] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5025] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5027] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5029] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5031] device (eth0): carrier: link connected
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5034] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5040] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5045] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5048] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5049] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5052] manager: NetworkManager state is now CONNECTING
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5053] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5059] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5062] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Started Network Manager.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Reached target Network.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5213] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5215] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 08:30:19 np0005555078.novalocal NetworkManager[856]: <info>  [1765441819.5221] device (lo): Activation: successful, device activated.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Reached target NFS client services.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: Reached target Remote File Systems.
Dec 11 08:30:19 np0005555078.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8355] dhcp4 (eth0): state changed new lease, address=38.102.83.2
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8365] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8385] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8401] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8403] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8406] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8409] device (eth0): Activation: successful, device activated.
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8414] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 08:30:21 np0005555078.novalocal NetworkManager[856]: <info>  [1765441821.8423] manager: startup complete
Dec 11 08:30:21 np0005555078.novalocal systemd[1]: Finished Network Manager Wait Online.
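eth0 went from unavailable to activated in roughly 2.3 seconds, with NetworkManager's internal DHCP client leasing 38.102.83.2. Given shell access, the device state and the deprecated ifcfg-rh profile flagged earlier could be reviewed with:

    nmcli -f GENERAL,IP4 device show eth0   # state, connection, address, gateway
    nmcli connection show 'System eth0'     # the ifcfg-rh profile named in the log
    nmcli connection migrate                # keyfile migration suggested by the warning above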
Dec 11 08:30:21 np0005555078.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 11 Dec 2025 08:30:22 +0000. Up 8.76 seconds.
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |  eth0  | True |         38.102.83.2          | 255.255.255.0 | global | fa:16:3e:fe:f4:00 |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fefe:f400/64 |       .       |  link  | fa:16:3e:fe:f4:00 |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec 11 08:30:22 np0005555078.novalocal cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
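Route 2 in the IPv4 table is a /32 host route to 169.254.169.254 via 38.102.83.126, the usual OpenStack metadata-service path pushed through DHCP classless static routes. Although this instance ends up reading a config drive (see the DataSourceConfigDrive line near the end), the route can be exercised directly:

    ip route show   # should list 169.254.169.254 via 38.102.83.126 dev eth0
    curl -s http://169.254.169.254/openstack/latest/meta_data.json   # if the service answers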
Dec 11 08:30:22 np0005555078.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Dec 11 08:30:22 np0005555078.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 11 08:30:22 np0005555078.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Dec 11 08:30:22 np0005555078.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Dec 11 08:30:22 np0005555078.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Dec 11 08:30:22 np0005555078.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Generating public/private rsa key pair.
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: The key fingerprint is:
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: SHA256:xQDcKHncuPVl5kteFGovx5Mos+n3eYN76aUNFCzi2E4 root@np0005555078.novalocal
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: The key's randomart image is:
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: +---[RSA 3072]----+
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |     +.*.     .. |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |    o * +o  +o.  |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |     o o .+=+.o  |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |      .  =.oo=.o |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |        S EoooB  |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |         o =o+ . |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |          +   o o|
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |         .  .. Oo|
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |          .. oBoo|
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: +----[SHA256]-----+
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Generating public/private ecdsa key pair.
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: The key fingerprint is:
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: SHA256:eaYgXyVia9OFMTeFJxqw03S1+DuaCW7M3xD7Q7kYUO4 root@np0005555078.novalocal
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: The key's randomart image is:
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: +---[ECDSA 256]---+
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |      ..+ ++o    |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |       +.*+o..   |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |      = +=+o.    |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |     . =o=..     |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |    . = So+ ..   |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |     + + +Eoo.   |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |      .oo o+o.   |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |       .+..Bo.   |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |       ...= o.   |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: +----[SHA256]-----+
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Generating public/private ed25519 key pair.
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: The key fingerprint is:
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: SHA256:eaYqEO/hGeZNB+mb3PgQbwJOlkT67pk0qnJkO/3jUqk root@np0005555078.novalocal
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: The key's randomart image is:
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: +--[ED25519 256]--+
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |    .            |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |   o             |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |  . .  .         |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |  .o .o  .       |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |   o*..oS o      |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |  +==.=o.+       |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: | o O=Xo*+        |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |. ++E+B=.        |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: |oo..+=+o.        |
Dec 11 08:30:23 np0005555078.novalocal cloud-init[918]: +----[SHA256]-----+
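Because the packaged sshd-keygen units were skipped earlier (their condition defers to cloud-init), cloud-init generated all three host key pairs itself: RSA 3072, ECDSA 256, and ED25519 256. Any fingerprint printed above can be reproduced from the matching public key, e.g.:

    ssh-keygen -l -f /etc/ssh/ssh_host_ed25519_key.pub   # prints the SHA256 fingerprint shown above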
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Reached target Network is Online.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting System Logging Service...
Dec 11 08:30:23 np0005555078.novalocal sm-notify[1002]: Version 2.5.4 starting
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting Permit User Sessions...
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 11 08:30:23 np0005555078.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Dec 11 08:30:23 np0005555078.novalocal sshd[1004]: Server listening on :: port 22.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Finished Permit User Sessions.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Started Command Scheduler.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Started Getty on tty1.
Dec 11 08:30:23 np0005555078.novalocal crond[1007]: (CRON) STARTUP (1.5.7)
Dec 11 08:30:23 np0005555078.novalocal crond[1007]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 11 08:30:23 np0005555078.novalocal crond[1007]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 71% if used.)
Dec 11 08:30:23 np0005555078.novalocal crond[1007]: (CRON) INFO (running with inotify support)
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 11 08:30:23 np0005555078.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Dec 11 08:30:23 np0005555078.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Reached target Login Prompts.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Started System Logging Service.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Reached target Multi-User System.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 11 08:30:23 np0005555078.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 08:30:23 np0005555078.novalocal kdumpctl[1016]: kdump: No kdump initial ramdisk found.
Dec 11 08:30:23 np0005555078.novalocal kdumpctl[1016]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 11 08:30:23 np0005555078.novalocal cloud-init[1162]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 11 Dec 2025 08:30:23 +0000. Up 10.21 seconds.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 11 08:30:23 np0005555078.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 11 08:30:23 np0005555078.novalocal dracut[1263]: dracut-057-102.git20250818.el9
Dec 11 08:30:23 np0005555078.novalocal dracut[1265]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
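kdumpctl found no crash initramfs and is rebuilding one with the host-only dracut invocation above, pinned to the same root UUID as the kernel command line. After boot, the result could be verified with:

    kdumpctl status   # reports whether the crash kernel is loaded
    lsinitrd /boot/initramfs-5.14.0-648.el9.x86_64kdump.img | head   # inspect the rebuilt image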
Dec 11 08:30:23 np0005555078.novalocal sshd-session[1292]: Connection reset by 38.102.83.114 port 51632 [preauth]
Dec 11 08:30:23 np0005555078.novalocal sshd-session[1310]: Unable to negotiate with 38.102.83.114 port 51646: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 11 08:30:23 np0005555078.novalocal cloud-init[1339]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 11 Dec 2025 08:30:23 +0000. Up 10.59 seconds.
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1326]: Connection reset by 38.102.83.114 port 51650 [preauth]
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1346]: Unable to negotiate with 38.102.83.114 port 51654: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1352]: #############################################################
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1354]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1360]: 256 SHA256:eaYgXyVia9OFMTeFJxqw03S1+DuaCW7M3xD7Q7kYUO4 root@np0005555078.novalocal (ECDSA)
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1353]: Unable to negotiate with 38.102.83.114 port 51670: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1364]: 256 SHA256:eaYqEO/hGeZNB+mb3PgQbwJOlkT67pk0qnJkO/3jUqk root@np0005555078.novalocal (ED25519)
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1369]: 3072 SHA256:xQDcKHncuPVl5kteFGovx5Mos+n3eYN76aUNFCzi2E4 root@np0005555078.novalocal (RSA)
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1370]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1371]: #############################################################
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1362]: Connection closed by 38.102.83.114 port 51680 [preauth]
Dec 11 08:30:24 np0005555078.novalocal cloud-init[1339]: Cloud-init v. 24.4-7.el9 finished at Thu, 11 Dec 2025 08:30:24 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.75 seconds
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1384]: Connection reset by 38.102.83.114 port 51690 [preauth]
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1389]: Unable to negotiate with 38.102.83.114 port 51698: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 11 08:30:24 np0005555078.novalocal sshd-session[1396]: Unable to negotiate with 38.102.83.114 port 51702: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
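The string of [preauth] failures is a single remote host, 38.102.83.114, opening one connection per host key family (ssh-ed25519, ecdsa-sha2-nistp384/521, ssh-rsa, ssh-dss) and disconnecting: behavior typical of a scanner probing which host key types the server will present. The ssh-dss and SHA-1 ssh-rsa offers fail because RHEL 9's OpenSSH disables them by default. What this build and daemon actually accept can be listed with:

    ssh -Q key                              # key and certificate types this OpenSSH supports
    sshd -T 2>/dev/null | grep -i hostkey   # effective host key configuration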
Dec 11 08:30:24 np0005555078.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 11 08:30:24 np0005555078.novalocal systemd[1]: Reached target Cloud-init target.
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: memstrack is not available
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 11 08:30:24 np0005555078.novalocal dracut[1265]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: memstrack is not available
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: *** Including module: systemd ***
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: *** Including module: fips ***
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: *** Including module: systemd-initrd ***
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: *** Including module: i18n ***
Dec 11 08:30:25 np0005555078.novalocal dracut[1265]: *** Including module: drm ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: prefixdevname ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: kernel-modules ***
Dec 11 08:30:26 np0005555078.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: kernel-modules-extra ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: qemu ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: fstab-sys ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: rootfs-block ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: terminfo ***
Dec 11 08:30:26 np0005555078.novalocal dracut[1265]: *** Including module: udev-rules ***
Dec 11 08:30:26 np0005555078.novalocal chronyd[787]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Dec 11 08:30:26 np0005555078.novalocal chronyd[787]: System clock TAI offset set to 37 seconds
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: Skipping udev rule: 91-permissions.rules
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: *** Including module: virtiofs ***
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: *** Including module: dracut-systemd ***
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: *** Including module: usrmount ***
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: *** Including module: base ***
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: *** Including module: fs-lib ***
Dec 11 08:30:27 np0005555078.novalocal dracut[1265]: *** Including module: kdumpbase ***
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:   microcode_ctl module: mangling fw_dir
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 11 08:30:28 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
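
microcode_ctl walks each ucode_with_caveats directory and, on this KVM guest, finds that none of the Intel caveat configurations apply, so every one is ignored and fw_dir ends where it started. A small sketch under the same assumptions as above that condenses this block into a summary:

    import re

    IGNORED_RE = re.compile(r'configuration "(?P<name>[^"]+)" is ignored')
    FWDIR_RE = re.compile(r'final fw_dir: "(?P<dirs>[^"]+)"')

    def microcode_summary(lines):
        """Return (ignored_configs, final_fw_dir) from journal lines."""
        ignored, fw_dir = [], None
        for line in lines:
            if (m := IGNORED_RE.search(line)):
                ignored.append(m.group("name"))
            elif (m := FWDIR_RE.search(line)):
                fw_dir = m.group("dirs").split()
        return ignored, fw_dir

    # Here: ten ignored caveat configurations and a final fw_dir of
    # ['/lib/firmware/updates', '/lib/firmware'], i.e. unchanged.
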
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]: *** Including module: openssl ***
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]: *** Including module: shutdown ***
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]: *** Including module: squash ***
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]: *** Including modules done ***
Dec 11 08:30:29 np0005555078.novalocal dracut[1265]: *** Installing kernel module dependencies ***
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: IRQ 25 affinity is now unmanaged
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: IRQ 31 affinity is now unmanaged
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: IRQ 28 affinity is now unmanaged
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: IRQ 32 affinity is now unmanaged
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: IRQ 30 affinity is now unmanaged
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 11 08:30:29 np0005555078.novalocal irqbalance[783]: IRQ 29 affinity is now unmanaged
Dec 11 08:30:30 np0005555078.novalocal dracut[1265]: *** Installing kernel module dependencies done ***
Dec 11 08:30:30 np0005555078.novalocal dracut[1265]: *** Resolving executable dependencies ***
Dec 11 08:30:31 np0005555078.novalocal dracut[1265]: *** Resolving executable dependencies done ***
Dec 11 08:30:31 np0005555078.novalocal dracut[1265]: *** Generating early-microcode cpio image ***
Dec 11 08:30:31 np0005555078.novalocal dracut[1265]: *** Store current command line parameters ***
Dec 11 08:30:31 np0005555078.novalocal dracut[1265]: Stored kernel commandline:
Dec 11 08:30:31 np0005555078.novalocal dracut[1265]: No dracut internal kernel commandline stored in the initramfs
Dec 11 08:30:31 np0005555078.novalocal dracut[1265]: *** Install squash loader ***
Dec 11 08:30:31 np0005555078.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:30:32 np0005555078.novalocal dracut[1265]: *** Squashing the files inside the initramfs ***
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: *** Squashing the files inside the initramfs done ***
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: *** Hardlinking files ***
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Mode:           real
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Files:          50
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Linked:         0 files
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Compared:       0 xattrs
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Compared:       0 files
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Saved:          0 B
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: Duration:       0.000536 seconds
Dec 11 08:30:33 np0005555078.novalocal dracut[1265]: *** Hardlinking files done ***
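
The hardlink pass examined 50 files and deduplicated nothing (0 linked, 0 B saved, in roughly half a millisecond), which is unsurprising for a small squash-based image. A rough parser for this summary block, assuming the "key: value" layout shown above:

    import re

    STAT_RE = re.compile(
        r"(Mode|Files|Linked|Compared|Saved|Duration):\s+(\S.*)$"
    )

    def parse_hardlink_stats(lines):
        """Collect dracut's hardlink summary into {key: [values]}."""
        stats = {}
        for line in lines:
            if (m := STAT_RE.search(line)):
                stats.setdefault(m.group(1), []).append(m.group(2).strip())
        return stats

    # For the block above: {'Mode': ['real'], 'Files': ['50'],
    #  'Linked': ['0 files'], 'Compared': ['0 xattrs', '0 files'],
    #  'Saved': ['0 B'], 'Duration': ['0.000536 seconds']}
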
Dec 11 08:30:34 np0005555078.novalocal dracut[1265]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 11 08:30:34 np0005555078.novalocal kdumpctl[1016]: kdump: kexec: loaded kdump kernel
Dec 11 08:30:34 np0005555078.novalocal kdumpctl[1016]: kdump: Starting kdump: [OK]
Dec 11 08:30:34 np0005555078.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 11 08:30:34 np0005555078.novalocal systemd[1]: Startup finished in 1.531s (kernel) + 2.318s (initrd) + 17.272s (userspace) = 21.121s.
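
systemd's summary decomposes boot into kernel, initrd and userspace phases, and the figures above are internally consistent (1.531 + 2.318 + 17.272 = 21.121 s). When scanning many such logs it can be worth checking that invariant mechanically; a sketch:

    import math
    import re

    STARTUP_RE = re.compile(
        r"Startup finished in (?P<k>[\d.]+)s \(kernel\) "
        r"\+ (?P<i>[\d.]+)s \(initrd\) "
        r"\+ (?P<u>[\d.]+)s \(userspace\) = (?P<total>[\d.]+)s\."
    )

    def startup_sum_ok(line):
        """True/False for a startup summary line, None otherwise."""
        if not (m := STARTUP_RE.search(line)):
            return None
        parts = sum(float(m.group(g)) for g in ("k", "i", "u"))
        # systemd prints rounded values, so allow a millisecond of slack.
        return math.isclose(parts, float(m.group("total")), abs_tol=0.001)

    # For the line above: 1.531 + 2.318 + 17.272 == 21.121 -> True.
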
Dec 11 08:30:49 np0005555078.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:31:15 np0005555078.novalocal sshd-session[4294]: Accepted publickey for zuul from 38.102.83.114 port 35076 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 11 08:31:15 np0005555078.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 11 08:31:15 np0005555078.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 11 08:31:15 np0005555078.novalocal systemd-logind[791]: New session 1 of user zuul.
Dec 11 08:31:15 np0005555078.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 11 08:31:15 np0005555078.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Queued start job for default target Main User Target.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Created slice User Application Slice.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Reached target Paths.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Reached target Timers.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Starting D-Bus User Message Bus Socket...
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Starting Create User's Volatile Files and Directories...
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Listening on D-Bus User Message Bus Socket.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Reached target Sockets.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Finished Create User's Volatile Files and Directories.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Reached target Basic System.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Reached target Main User Target.
Dec 11 08:31:15 np0005555078.novalocal systemd[4298]: Startup finished in 134ms.
Dec 11 08:31:15 np0005555078.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 11 08:31:15 np0005555078.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 11 08:31:15 np0005555078.novalocal sshd-session[4294]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:31:16 np0005555078.novalocal python3[4380]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:20 np0005555078.novalocal python3[4408]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:27 np0005555078.novalocal python3[4466]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:28 np0005555078.novalocal python3[4506]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 11 08:31:30 np0005555078.novalocal python3[4532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnt2InnHq6tApNig+P5WVoMHw1rlk7UfQdxOhvkyjN645QP7rAPOf+kiZ5vlE1JQdo2PD+c1o83wN4ZpjrJ6P2pHioYrGxNq//bkYcu2OvWWyKacU3XnXkr8D8sgH4mTPrVOFvx0VXPUA5NRbxgeuG5zwJU0pKdPqTFe1Eiyse5nHVWbaLfedSmapHiMrI0jnu0lQTlS7AclHMTRd01iU0vWBay/eZzB7grlUZKUEiMsOjSoWhhTnihf2M/5DM+vrD1mWyMLO+HeWe7Vrwl9JZuj8wWTA3IEK1/dSSboiR2+A5kMPqwDsrDNkrqvaew7lF6rIHRymiOvEtwK7700U6S+tK8EExFTNxrZXxDwvZLYdHVWCxIRNRxS5AxhPBsEqkhKqpmFjffm7AyhHZ2j3rxSir3TmxwVk0QLd2RPG3ypPAWYlz/rfVjwZQwuLY8pqmsnUKb7Lo9hln/NrQfRR5UDbY/j6nzSZyFwgd7KjdHB1Ld9Z/N3unxqaho2c81Zs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:31 np0005555078.novalocal python3[4556]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:31 np0005555078.novalocal python3[4655]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:31 np0005555078.novalocal python3[4726]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765441891.3330798-252-254353615627778/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=aa49124d53fb4dc8bbe02719772364ce_id_rsa follow=False checksum=920384a381621a9eadb407e248e38edf0c531f3e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:32 np0005555078.novalocal python3[4849]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:32 np0005555078.novalocal python3[4920]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765441892.2973213-307-158526806745954/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=aa49124d53fb4dc8bbe02719772364ce_id_rsa.pub follow=False checksum=10cf1c2e97c3e098002d40625d0547a51192a872 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:33 np0005555078.novalocal chronyd[787]: Selected source 174.142.148.226 (2.centos.pool.ntp.org)
Dec 11 08:31:34 np0005555078.novalocal python3[4968]: ansible-ping Invoked with data=pong
Dec 11 08:31:35 np0005555078.novalocal python3[4992]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:31:38 np0005555078.novalocal python3[5050]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 11 08:31:39 np0005555078.novalocal python3[5082]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:39 np0005555078.novalocal python3[5106]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:39 np0005555078.novalocal python3[5130]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:40 np0005555078.novalocal python3[5154]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:40 np0005555078.novalocal python3[5178]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:40 np0005555078.novalocal python3[5202]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
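
Note that Ansible serializes file modes into these journal entries as decimal integers, which is easy to misread: mode=448 above is 0o700, 384 is 0o600, 420 is 0o644 and 493 is 0o755 (the later mode=511 and mode=288 entries are 0o777 and 0o440). A one-liner to translate while reading:

    # The modes Ansible logged above, converted back to octal notation.
    for mode in (448, 384, 420, 493, 511, 288):
        print(f"mode={mode} -> 0o{mode:o}")

    # mode=448 -> 0o700   (~/.ssh)
    # mode=384 -> 0o600   (id_rsa)
    # mode=420 -> 0o644   (id_rsa.pub, mirror_info.sh)
    # mode=493 -> 0o755   (zuul-output dirs, /etc/ci)
    # mode=511 -> 0o777   (/etc/nodepool)
    # mode=288 -> 0o440   (/etc/sudoers.d/zuul-sudo-grep)
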
Dec 11 08:31:42 np0005555078.novalocal sudo[5226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwjbvltzecgjsumnexauugheejzfnoc ; /usr/bin/python3'
Dec 11 08:31:42 np0005555078.novalocal sudo[5226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:42 np0005555078.novalocal python3[5228]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:42 np0005555078.novalocal sudo[5226]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:43 np0005555078.novalocal sudo[5304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrlqfblpreoiepwsjtwtzbpaaklthfqi ; /usr/bin/python3'
Dec 11 08:31:43 np0005555078.novalocal sudo[5304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:43 np0005555078.novalocal python3[5306]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:43 np0005555078.novalocal sudo[5304]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:43 np0005555078.novalocal sudo[5377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfvayvvriamltpjyfcbqoscqgiyiioky ; /usr/bin/python3'
Dec 11 08:31:43 np0005555078.novalocal sudo[5377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:44 np0005555078.novalocal python3[5379]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765441902.981936-33-121674104095623/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:44 np0005555078.novalocal sudo[5377]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:44 np0005555078.novalocal python3[5427]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:44 np0005555078.novalocal python3[5451]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555078.novalocal python3[5475]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555078.novalocal python3[5499]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:45 np0005555078.novalocal python3[5523]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555078.novalocal python3[5547]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555078.novalocal python3[5571]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:46 np0005555078.novalocal python3[5595]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555078.novalocal python3[5619]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555078.novalocal python3[5643]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555078.novalocal python3[5667]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:47 np0005555078.novalocal python3[5691]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555078.novalocal python3[5715]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555078.novalocal python3[5739]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555078.novalocal python3[5763]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:48 np0005555078.novalocal python3[5787]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:49 np0005555078.novalocal python3[5811]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:49 np0005555078.novalocal python3[5835]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:49 np0005555078.novalocal python3[5859]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555078.novalocal python3[5883]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555078.novalocal python3[5907]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555078.novalocal python3[5931]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:50 np0005555078.novalocal python3[5955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:51 np0005555078.novalocal python3[5979]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:51 np0005555078.novalocal python3[6003]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:31:51 np0005555078.novalocal python3[6027]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
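
The burst of authorized_key tasks above installs a couple of dozen developer keys into /home/zuul/.ssh/authorized_keys; with state=present each key is appended only if not already there. For a quick audit of what was granted access, a sketch (helper name illustrative) extracting key type and comment from these entries:

    import re

    KEY_RE = re.compile(
        r"ansible-authorized_key Invoked with user=(?P<user>\S+) "
        r"state=present key=(?P<type>\S+) (?P<blob>\S+)"
        r"(?: (?P<comment>.+?))? manage_dir="
    )

    def key_audit(lines):
        """Yield (key_type, comment) for each authorized_key task."""
        for line in lines:
            if (m := KEY_RE.search(line)):
                yield m.group("type"), m.group("comment") or "(no comment)"

    # Yields e.g. ('ssh-rsa', 'zuul-build-sshkey'),
    # ('ssh-ed25519', 'cjeanner'), ('ssh-ed25519', '(no comment)'), ...
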
Dec 11 08:31:55 np0005555078.novalocal sudo[6051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahnyrpfoslmyfnzfvgmjawzyorietwfo ; /usr/bin/python3'
Dec 11 08:31:55 np0005555078.novalocal sudo[6051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:55 np0005555078.novalocal python3[6053]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 11 08:31:55 np0005555078.novalocal systemd[1]: Starting Time & Date Service...
Dec 11 08:31:55 np0005555078.novalocal systemd[1]: Started Time & Date Service.
Dec 11 08:31:55 np0005555078.novalocal systemd-timedated[6055]: Changed time zone to 'UTC' (UTC).
Dec 11 08:31:55 np0005555078.novalocal sudo[6051]: pam_unix(sudo:session): session closed for user root
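
The timezone task goes through systemd-timedated over D-Bus (hence the transient "Time & Date Service" start around it) rather than rewriting /etc/localtime by hand. Outside Ansible the equivalent change is a single timedatectl call; a sketch, assuming a systemd host and root privileges:

    import subprocess

    # Equivalent of the community.general.timezone task logged above:
    # ask timedated, via the timedatectl CLI, to switch the system zone.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)

    # Confirm; the output includes a "Time zone: UTC (UTC, +0000)" line.
    subprocess.run(["timedatectl", "status"], check=True)
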
Dec 11 08:31:55 np0005555078.novalocal sudo[6082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsxqwgphvljvoxflbaiytwyuuyyvnscb ; /usr/bin/python3'
Dec 11 08:31:55 np0005555078.novalocal sudo[6082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:55 np0005555078.novalocal python3[6084]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:55 np0005555078.novalocal sudo[6082]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:56 np0005555078.novalocal python3[6160]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:56 np0005555078.novalocal python3[6231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765441916.1310298-252-55047579263835/source _original_basename=tmppouumddn follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:57 np0005555078.novalocal python3[6331]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:57 np0005555078.novalocal python3[6402]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765441917.0719314-302-240096754271798/source _original_basename=tmppc1dsa7f follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:58 np0005555078.novalocal sudo[6502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywjcypavnkhwlprjrgpbixrabduqgyab ; /usr/bin/python3'
Dec 11 08:31:58 np0005555078.novalocal sudo[6502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:58 np0005555078.novalocal python3[6504]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:31:58 np0005555078.novalocal sudo[6502]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:58 np0005555078.novalocal sudo[6575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeiznqnwanfrjvyvjbmfuweqmqbnorcz ; /usr/bin/python3'
Dec 11 08:31:58 np0005555078.novalocal sudo[6575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:31:58 np0005555078.novalocal python3[6577]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765441918.3305042-382-95751322375703/source _original_basename=tmpea6cii6_ follow=False checksum=3919d89d51143d2a8a3d8431c73f2c48622cad93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:31:58 np0005555078.novalocal sudo[6575]: pam_unix(sudo:session): session closed for user root
Dec 11 08:31:59 np0005555078.novalocal python3[6625]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:31:59 np0005555078.novalocal python3[6651]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:32:00 np0005555078.novalocal sudo[6729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sysmksibqeptiyelxktisexsxljffhqi ; /usr/bin/python3'
Dec 11 08:32:00 np0005555078.novalocal sudo[6729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:00 np0005555078.novalocal python3[6731]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:32:00 np0005555078.novalocal sudo[6729]: pam_unix(sudo:session): session closed for user root
Dec 11 08:32:00 np0005555078.novalocal sudo[6802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zayjicmpfmxsjvqkxqgnikjdlqxryjqi ; /usr/bin/python3'
Dec 11 08:32:00 np0005555078.novalocal sudo[6802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:00 np0005555078.novalocal python3[6804]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765441920.0760727-453-32834186874053/source _original_basename=tmpgmae1x70 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:00 np0005555078.novalocal sudo[6802]: pam_unix(sudo:session): session closed for user root
Dec 11 08:32:01 np0005555078.novalocal sudo[6853]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlhafywpsuczqxxufmtjkcjdpupebuwh ; /usr/bin/python3'
Dec 11 08:32:01 np0005555078.novalocal sudo[6853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:01 np0005555078.novalocal python3[6855]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-a534-11ee-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:32:01 np0005555078.novalocal sudo[6853]: pam_unix(sudo:session): session closed for user root
Dec 11 08:32:02 np0005555078.novalocal python3[6883]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-a534-11ee-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 11 08:32:03 np0005555078.novalocal python3[6912]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:25 np0005555078.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 11 08:32:40 np0005555078.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzhdqvhtnxfnppqbhdrwjwklyfdudjk ; /usr/bin/python3'
Dec 11 08:32:40 np0005555078.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:32:40 np0005555078.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:32:40 np0005555078.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Dec 11 08:33:33 np0005555078.novalocal systemd[4298]: Starting Mark boot as successful...
Dec 11 08:33:33 np0005555078.novalocal systemd[4298]: Finished Mark boot as successful.
Dec 11 08:33:40 np0005555078.novalocal sshd-session[4307]: Received disconnect from 38.102.83.114 port 35076:11: disconnected by user
Dec 11 08:33:40 np0005555078.novalocal sshd-session[4307]: Disconnected from user zuul 38.102.83.114 port 35076
Dec 11 08:33:40 np0005555078.novalocal sshd-session[4294]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:33:40 np0005555078.novalocal systemd-logind[791]: Session 1 logged out. Waiting for processes to exit.
Dec 11 08:33:43 np0005555078.novalocal chronyd[787]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 11 08:33:54 np0005555078.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 11 08:33:54 np0005555078.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
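
The kernel lines above show a virtio NIC being hot-plugged: the BARs are first probed relative to zero and then assigned real bus addresses, with window sizes preserved (BAR 0 asks for 0x0000-0x003f, 64 bytes of I/O ports, and lands at 0x1000-0x103f). A sketch computing those window sizes from the log:

    import re

    BAR_RE = re.compile(
        r"BAR (?P<bar>\d+) \[(?P<kind>io|mem)\s+"
        r"0x(?P<lo>[0-9a-f]+)-0x(?P<hi>[0-9a-f]+)"
    )

    def bar_sizes(lines):
        """Yield (bar_index, kind, size_in_bytes) for each BAR line."""
        for line in lines:
            if (m := BAR_RE.search(line)):
                size = int(m.group("hi"), 16) - int(m.group("lo"), 16) + 1
                yield int(m.group("bar")), m.group("kind"), size

    # For the device above: BAR 0 -> ('io', 64), BAR 1 -> ('mem', 4096),
    # BAR 4 -> ('mem', 16384); probe and assignment sizes agree.
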
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7525] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:33:54 np0005555078.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7693] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7728] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7734] device (eth1): carrier: link connected
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7738] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7748] policy: auto-activating connection 'Wired connection 1' (c8dc28c4-ba83-33d4-a5e8-63663506fb05)
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7753] device (eth1): Activation: starting connection 'Wired connection 1' (c8dc28c4-ba83-33d4-a5e8-63663506fb05)
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7755] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7761] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7768] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:33:54 np0005555078.novalocal NetworkManager[856]: <info>  [1765442034.7776] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
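
NetworkManager then walks eth1 through its device state ladder (unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config) and parks it waiting on DHCP. Reconstructing a device's path from these messages is handy when debugging activation stalls; a sketch:

    import re

    STATE_RE = re.compile(
        r"device \((?P<dev>[^)]+)\): state change: "
        r"(?P<old>[\w-]+) -> (?P<new>[\w-]+)"
    )

    def state_path(lines, dev):
        """Ordered list of states the given device moved through."""
        path = []
        for line in lines:
            m = STATE_RE.search(line)
            if m and m.group("dev") == dev:
                if not path:
                    path.append(m.group("old"))
                path.append(m.group("new"))
        return path

    # For eth1 above: ['unmanaged', 'unavailable', 'disconnected',
    # 'prepare', 'config', 'ip-config'] -- it then sits in ip-config
    # until the DHCP transaction completes.
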
Dec 11 08:33:56 np0005555078.novalocal sshd-session[6946]: Accepted publickey for zuul from 38.102.83.114 port 39498 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:33:56 np0005555078.novalocal systemd-logind[791]: New session 3 of user zuul.
Dec 11 08:33:56 np0005555078.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 11 08:33:56 np0005555078.novalocal sshd-session[6946]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:33:56 np0005555078.novalocal python3[6973]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-d2a4-3bf1-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:34:06 np0005555078.novalocal sudo[7051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkaoemslvzaewozfwcbdezzxxtovdldf ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:34:06 np0005555078.novalocal sudo[7051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:34:06 np0005555078.novalocal python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:34:06 np0005555078.novalocal sudo[7051]: pam_unix(sudo:session): session closed for user root
Dec 11 08:34:06 np0005555078.novalocal sudo[7124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hamyutqywqwecqizwhcopbkeykfczncq ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:34:06 np0005555078.novalocal sudo[7124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:34:06 np0005555078.novalocal python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765442046.3351462-155-149025501611225/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2648390f247f594aec3ed729a4121ee60cfbc297 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:34:06 np0005555078.novalocal sudo[7124]: pam_unix(sudo:session): session closed for user root
Dec 11 08:34:07 np0005555078.novalocal sudo[7174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcmwwttaeesdsmedshabznlktpjfmngi ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:34:07 np0005555078.novalocal sudo[7174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:34:07 np0005555078.novalocal python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Stopping Network Manager...
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5322] caught SIGTERM, shutting down normally.
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5340] dhcp4 (eth0): canceled DHCP transaction
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5341] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5341] dhcp4 (eth0): state changed no lease
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5345] manager: NetworkManager state is now CONNECTING
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5423] dhcp4 (eth1): canceled DHCP transaction
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5424] dhcp4 (eth1): state changed no lease
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[856]: <info>  [1765442047.5515] exiting (success)
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Stopped Network Manager.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: NetworkManager.service: Consumed 1.483s CPU time, 9.9M memory peak.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Starting Network Manager...
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.6111] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:32e43615-624a-4aad-9e4a-02351ae2816f)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.6115] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.6185] manager[0x55dad9dae000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Starting Hostname Service...
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Started Hostname Service.
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7232] hostname: hostname: using hostnamed
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7233] hostname: static hostname changed from (none) to "np0005555078.novalocal"
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7239] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7245] manager[0x55dad9dae000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7245] manager[0x55dad9dae000]: rfkill: WWAN hardware radio set enabled
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7271] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7271] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7272] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7272] manager: Networking is enabled by state file
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7275] settings: Loaded settings plugin: keyfile (internal)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7280] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7304] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7314] dhcp: init: Using DHCP client 'internal'
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7316] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7323] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7328] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7336] device (lo): Activation: starting connection 'lo' (2718cfd2-2948-42fc-8bc2-9c6575230f6f)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7342] device (eth0): carrier: link connected
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7346] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7350] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7351] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7358] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7364] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7371] device (eth1): carrier: link connected
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7374] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7379] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c8dc28c4-ba83-33d4-a5e8-63663506fb05) (indicated)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7380] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7388] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7396] device (eth1): Activation: starting connection 'Wired connection 1' (c8dc28c4-ba83-33d4-a5e8-63663506fb05)
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Started Network Manager.
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7403] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7407] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7410] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7412] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7414] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7418] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7420] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7423] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7426] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7433] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7436] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7449] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7452] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7468] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7474] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7479] device (lo): Activation: successful, device activated.
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7496] dhcp4 (eth0): state changed new lease, address=38.102.83.2
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7501] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 08:34:07 np0005555078.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7588] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7609] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7611] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7613] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7617] device (eth0): Activation: successful, device activated.
Dec 11 08:34:07 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442047.7622] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 08:34:07 np0005555078.novalocal sudo[7174]: pam_unix(sudo:session): session closed for user root
Dec 11 08:34:08 np0005555078.novalocal python3[7261]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-d2a4-3bf1-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:34:17 np0005555078.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:34:37 np0005555078.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3678] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 08:34:53 np0005555078.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:34:53 np0005555078.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3939] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3942] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3949] device (eth1): Activation: successful, device activated.
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3956] manager: startup complete
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3957] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <warn>  [1765442093.3962] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.3969] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4072] dhcp4 (eth1): canceled DHCP transaction
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4076] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4076] dhcp4 (eth1): state changed no lease
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4094] policy: auto-activating connection 'ci-private-network' (911d1d92-3d36-5779-a68f-4138e40215d3)
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4099] device (eth1): Activation: starting connection 'ci-private-network' (911d1d92-3d36-5779-a68f-4138e40215d3)
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4100] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4103] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4112] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4121] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4168] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4169] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 08:34:53 np0005555078.novalocal NetworkManager[7186]: <info>  [1765442093.4174] device (eth1): Activation: successful, device activated.
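[editor's note] The sequence above shows NetworkManager assuming 'Wired connection 1' on eth1, failing it with reason 'ip-config-unavailable' once DHCP reports no lease, and then auto-activating the fallback profile 'ci-private-network'. A minimal sketch for inspecting the same device/connection state from Python, assuming nmcli is available (it is on this host); field names are real nmcli fields, and the parsing assumes connection names contain no colons:

    import subprocess

    def nm_device_states():
        # Terse output: one "DEVICE:STATE" pair per line.
        out = subprocess.run(
            ["nmcli", "-t", "-f", "DEVICE,STATE", "device"],
            capture_output=True, text=True, check=True,
        ).stdout
        return dict(line.split(":", 1) for line in out.splitlines() if line)

    def nm_active_connections():
        # Terse output: "NAME:UUID:DEVICE" per line (names with ':' would need unescaping).
        out = subprocess.run(
            ["nmcli", "-t", "-f", "NAME,UUID,DEVICE", "connection", "show", "--active"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line.split(":") for line in out.splitlines() if line]

    if __name__ == "__main__":
        print(nm_device_states())       # e.g. {'eth0': 'connected', 'eth1': 'connected', ...}
        print(nm_active_connections())  # e.g. [['ci-private-network', '911d1d92-...', 'eth1'], ...]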
Dec 11 08:35:03 np0005555078.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:35:08 np0005555078.novalocal sshd-session[6949]: Received disconnect from 38.102.83.114 port 39498:11: disconnected by user
Dec 11 08:35:08 np0005555078.novalocal sshd-session[6949]: Disconnected from user zuul 38.102.83.114 port 39498
Dec 11 08:35:08 np0005555078.novalocal sshd-session[6946]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:35:08 np0005555078.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 11 08:35:08 np0005555078.novalocal systemd[1]: session-3.scope: Consumed 1.449s CPU time.
Dec 11 08:35:08 np0005555078.novalocal systemd-logind[791]: Session 3 logged out. Waiting for processes to exit.
Dec 11 08:35:08 np0005555078.novalocal systemd-logind[791]: Removed session 3.
Dec 11 08:35:52 np0005555078.novalocal sshd-session[7290]: Accepted publickey for zuul from 38.102.83.114 port 45136 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:35:52 np0005555078.novalocal systemd-logind[791]: New session 4 of user zuul.
Dec 11 08:35:52 np0005555078.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 11 08:35:52 np0005555078.novalocal sshd-session[7290]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:35:52 np0005555078.novalocal sudo[7369]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umnbogjquwjzuvuixbpbvmgwhedyqpdu ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:35:52 np0005555078.novalocal sudo[7369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:35:52 np0005555078.novalocal python3[7371]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:35:52 np0005555078.novalocal sudo[7369]: pam_unix(sudo:session): session closed for user root
Dec 11 08:35:52 np0005555078.novalocal sudo[7442]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opeejfxhdchcvjsiubhazmfkowuetzrf ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 11 08:35:52 np0005555078.novalocal sudo[7442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:35:52 np0005555078.novalocal python3[7444]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765442152.3877466-373-264125953473669/source _original_basename=tmphqm0wsdd follow=False checksum=b377fca187e86c4139f736c3bc95363c4dbb7898 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:35:52 np0005555078.novalocal sudo[7442]: pam_unix(sudo:session): session closed for user root
Dec 11 08:35:56 np0005555078.novalocal sshd-session[7293]: Connection closed by 38.102.83.114 port 45136
Dec 11 08:35:56 np0005555078.novalocal sshd-session[7290]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:35:56 np0005555078.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 11 08:35:56 np0005555078.novalocal systemd-logind[791]: Session 4 logged out. Waiting for processes to exit.
Dec 11 08:35:56 np0005555078.novalocal systemd-logind[791]: Removed session 4.
Dec 11 08:36:33 np0005555078.novalocal systemd[4298]: Created slice User Background Tasks Slice.
Dec 11 08:36:33 np0005555078.novalocal systemd[4298]: Starting Cleanup of User's Temporary Files and Directories...
Dec 11 08:36:33 np0005555078.novalocal systemd[4298]: Finished Cleanup of User's Temporary Files and Directories.
Dec 11 08:42:11 np0005555078.novalocal sshd-session[7474]: Accepted publickey for zuul from 38.102.83.114 port 39528 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:42:11 np0005555078.novalocal systemd-logind[791]: New session 5 of user zuul.
Dec 11 08:42:11 np0005555078.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 11 08:42:11 np0005555078.novalocal sshd-session[7474]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:42:11 np0005555078.novalocal sudo[7501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdbwolwuglnsxzyjnsdofidbianvjewu ; /usr/bin/python3'
Dec 11 08:42:11 np0005555078.novalocal sudo[7501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:12 np0005555078.novalocal python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-1cc7-554c-000000001f1d-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:12 np0005555078.novalocal sudo[7501]: pam_unix(sudo:session): session closed for user root
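[editor's note] The `lsblk -nd -o MAJ:MIN /dev/vda` task above fetches the root disk's major:minor device number (252:0 for the virtio-blk disk here), which the later tasks use as the device key in the cgroup io.max files. A sketch of the equivalent lookup without lsblk, assuming the path exists and is a block device node:

    import os

    def major_minor(dev: str = "/dev/vda") -> str:
        st = os.stat(dev)                # st_rdev carries the device number for device nodes
        return f"{os.major(st.st_rdev)}:{os.minor(st.st_rdev)}"

    print(major_minor())                 # e.g. "252:0" for a virtio-blk disk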
Dec 11 08:42:12 np0005555078.novalocal sudo[7530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnvesotquuhzuopmufclxrshikeunvtj ; /usr/bin/python3'
Dec 11 08:42:12 np0005555078.novalocal sudo[7530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555078.novalocal python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555078.novalocal sudo[7530]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:13 np0005555078.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjjnmgqkaobohbotmezcvrkkxtfqugp ; /usr/bin/python3'
Dec 11 08:42:13 np0005555078.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555078.novalocal python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555078.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:13 np0005555078.novalocal sudo[7582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgptornksxoefzuzfddgtcxoxojpbykz ; /usr/bin/python3'
Dec 11 08:42:13 np0005555078.novalocal sudo[7582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555078.novalocal python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555078.novalocal sudo[7582]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:13 np0005555078.novalocal sudo[7608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqpmsxfbbebgfzvtkpmxthqpbegvulrd ; /usr/bin/python3'
Dec 11 08:42:13 np0005555078.novalocal sudo[7608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:13 np0005555078.novalocal python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:13 np0005555078.novalocal sudo[7608]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:14 np0005555078.novalocal sudo[7634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddygjlcgxgwkdukcovwhkghbtngrtqn ; /usr/bin/python3'
Dec 11 08:42:14 np0005555078.novalocal sudo[7634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:14 np0005555078.novalocal python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:14 np0005555078.novalocal sudo[7634]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:14 np0005555078.novalocal sudo[7712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hulbgtmnytzpvithbuxcqhvzfijorpjc ; /usr/bin/python3'
Dec 11 08:42:14 np0005555078.novalocal sudo[7712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:14 np0005555078.novalocal python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:42:14 np0005555078.novalocal sudo[7712]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:14 np0005555078.novalocal sudo[7785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdnqehnstehilxlalgqdgnrckmiwlrsv ; /usr/bin/python3'
Dec 11 08:42:14 np0005555078.novalocal sudo[7785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:15 np0005555078.novalocal python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765442534.4740427-524-8824478426708/source _original_basename=tmpu23htfs9 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:42:15 np0005555078.novalocal sudo[7785]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:15 np0005555078.novalocal sudo[7835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smxynjdcmqxvcwfowzyxdrqcmktvaogy ; /usr/bin/python3'
Dec 11 08:42:15 np0005555078.novalocal sudo[7835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:16 np0005555078.novalocal python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 11 08:42:16 np0005555078.novalocal systemd[1]: Reloading.
Dec 11 08:42:16 np0005555078.novalocal systemd-rc-local-generator[7858]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:42:16 np0005555078.novalocal sudo[7835]: pam_unix(sudo:session): session closed for user root
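[editor's note] The pattern at 08:42:14-08:42:16 is the standard systemd drop-in mechanism: copy an override.conf into /etc/systemd/system.conf.d, then ask the manager to reload (ansible.builtin.systemd_service with daemon_reload=True, which produces the "Reloading." line). The override's actual content is not logged (content=NOT_LOGGING_PARAMETER), so the sketch below uses a purely hypothetical [Manager] setting only to illustrate the mechanism:

    import pathlib
    import subprocess

    DROPIN = pathlib.Path("/etc/systemd/system.conf.d/override.conf")
    DROPIN.parent.mkdir(mode=0o755, exist_ok=True)   # matches the file task's mode=0755
    # Hypothetical content: the real override.conf's settings are unknown from this log.
    DROPIN.write_text("[Manager]\nDefaultCPUAccounting=yes\n")
    subprocess.run(["systemctl", "daemon-reload"], check=True)  # requires root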
Dec 11 08:42:17 np0005555078.novalocal sudo[7890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqrbnkeqaseemmoiozqrabmsllcsachc ; /usr/bin/python3'
Dec 11 08:42:17 np0005555078.novalocal sudo[7890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:17 np0005555078.novalocal python3[7892]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 11 08:42:17 np0005555078.novalocal sudo[7890]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555078.novalocal sudo[7916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohznlzqfscsjxacjbajojillxfwnbmal ; /usr/bin/python3'
Dec 11 08:42:18 np0005555078.novalocal sudo[7916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:18 np0005555078.novalocal python3[7918]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:18 np0005555078.novalocal sudo[7916]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555078.novalocal sudo[7944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hswghwmjgjrrpxgoppwfzikztzryutrl ; /usr/bin/python3'
Dec 11 08:42:18 np0005555078.novalocal sudo[7944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:18 np0005555078.novalocal python3[7946]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:18 np0005555078.novalocal sudo[7944]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555078.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkdiigchvttnliyrimhxaowhkrauzqx ; /usr/bin/python3'
Dec 11 08:42:18 np0005555078.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:18 np0005555078.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:18 np0005555078.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:18 np0005555078.novalocal sudo[8000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgqhorkwjvjnindmvqyaoxboyiboual ; /usr/bin/python3'
Dec 11 08:42:18 np0005555078.novalocal sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:19 np0005555078.novalocal python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:19 np0005555078.novalocal sudo[8000]: pam_unix(sudo:session): session closed for user root
Dec 11 08:42:19 np0005555078.novalocal python3[8029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-1cc7-554c-000000001f24-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:42:20 np0005555078.novalocal python3[8059]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
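[editor's note] The four shell tasks at 08:42:18-08:42:19 apply identical cgroup-v2 IO throttles to the top-level init.scope, machine.slice, system.slice and user.slice groups (18000 read/write IOPS and 262144000 B/s, i.e. 250 MiB/s, read/write bandwidth on device 252:0), then the follow-up task reads each io.max back. The same procedure consolidated into one sketch; device key and limits are copied verbatim from the log, and the writes require root with the io controller enabled on the root cgroup:

    # Minimal sketch of the cgroup-v2 IO throttling applied above.
    LIMITS = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
    GROUPS = ["init.scope", "machine.slice", "system.slice", "user.slice"]

    for group in GROUPS:
        path = f"/sys/fs/cgroup/{group}/io.max"
        with open(path, "w") as f:      # write the throttle line for device 252:0
            f.write(LIMITS + "\n")
        with open(path) as f:           # read back, as the verification task does
            print(group, f.read().strip())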
Dec 11 08:42:23 np0005555078.novalocal sshd-session[7477]: Connection closed by 38.102.83.114 port 39528
Dec 11 08:42:23 np0005555078.novalocal sshd-session[7474]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:42:23 np0005555078.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 11 08:42:23 np0005555078.novalocal systemd[1]: session-5.scope: Consumed 4.044s CPU time.
Dec 11 08:42:23 np0005555078.novalocal systemd-logind[791]: Session 5 logged out. Waiting for processes to exit.
Dec 11 08:42:23 np0005555078.novalocal systemd-logind[791]: Removed session 5.
Dec 11 08:42:25 np0005555078.novalocal sshd-session[8063]: Accepted publickey for zuul from 38.102.83.114 port 34246 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:42:25 np0005555078.novalocal systemd-logind[791]: New session 6 of user zuul.
Dec 11 08:42:25 np0005555078.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 11 08:42:25 np0005555078.novalocal sshd-session[8063]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:42:25 np0005555078.novalocal sudo[8090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtqwtjsprpjpjtocijzhxwwnywuqjqaf ; /usr/bin/python3'
Dec 11 08:42:25 np0005555078.novalocal sudo[8090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:42:25 np0005555078.novalocal python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:42:40 np0005555078.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:42:50 np0005555078.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:43:00 np0005555078.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:43:01 np0005555078.novalocal setsebool[8158]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 11 08:43:01 np0005555078.novalocal setsebool[8158]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 08:43:13 np0005555078.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 08:43:31 np0005555078.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 11 08:43:31 np0005555078.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 08:43:31 np0005555078.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 11 08:43:31 np0005555078.novalocal systemd[1]: Reloading.
Dec 11 08:43:31 np0005555078.novalocal systemd-rc-local-generator[8908]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 08:43:31 np0005555078.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 08:43:32 np0005555078.novalocal sudo[8090]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:02 np0005555078.novalocal python3[23103]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-e635-fda5-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:44:03 np0005555078.novalocal kernel: evm: overlay not supported
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: Starting D-Bus User Message Bus...
Dec 11 08:44:03 np0005555078.novalocal dbus-broker-launch[23579]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 11 08:44:03 np0005555078.novalocal dbus-broker-launch[23579]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: Started D-Bus User Message Bus.
Dec 11 08:44:03 np0005555078.novalocal dbus-broker-lau[23579]: Ready
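[editor's note] "dbus-broker-lau" above is the same dbus-broker-launch process as pid 23579 two lines earlier; the kernel truncates process names to 15 characters (TASK_COMM_LEN), so journald records the clipped comm value.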
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: Created slice Slice /user.
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: podman-23507.scope: unit configures an IP firewall, but not running as root.
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: (This warning is only shown for the first unit using IP firewalling.)
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: Started podman-23507.scope.
Dec 11 08:44:03 np0005555078.novalocal systemd[4298]: Started podman-pause-56b9dce5.scope.
Dec 11 08:44:05 np0005555078.novalocal sudo[24446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afcutzctqhfammwwezkcxtomyksisizb ; /usr/bin/python3'
Dec 11 08:44:05 np0005555078.novalocal sudo[24446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:05 np0005555078.novalocal python3[24458]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.162:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.162:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:05 np0005555078.novalocal python3[24458]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 11 08:44:05 np0005555078.novalocal sudo[24446]: pam_unix(sudo:session): session closed for user root
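[editor's note] The blockinfile task above appends a marked, re-editable block to /etc/containers/registries.conf, registering 38.102.83.162:5001 as an insecure (plain-HTTP) container registry. A rough Python sketch of what the module writes, with the marker strings and TOML block taken from the invocation; the module's backup handling and in-place block replacement are omitted:

    MARK_BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    MARK_END = "# END ANSIBLE MANAGED BLOCK"
    BLOCK = '[[registry]]\nlocation = "38.102.83.162:5001"\ninsecure = true'

    path = "/etc/containers/registries.conf"
    with open(path) as f:
        text = f.read()
    if MARK_BEGIN not in text:           # append only if the managed block is absent
        with open(path, "a") as f:
            f.write(f"\n{MARK_BEGIN}\n{BLOCK}\n{MARK_END}\n")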
Dec 11 08:44:06 np0005555078.novalocal sshd-session[8066]: Connection closed by 38.102.83.114 port 34246
Dec 11 08:44:06 np0005555078.novalocal sshd-session[8063]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:44:06 np0005555078.novalocal systemd-logind[791]: Session 6 logged out. Waiting for processes to exit.
Dec 11 08:44:06 np0005555078.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 11 08:44:06 np0005555078.novalocal systemd[1]: session-6.scope: Consumed 1min 4.259s CPU time.
Dec 11 08:44:06 np0005555078.novalocal systemd-logind[791]: Removed session 6.
Dec 11 08:44:19 np0005555078.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 08:44:19 np0005555078.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 11 08:44:19 np0005555078.novalocal systemd[1]: man-db-cache-update.service: Consumed 56.964s CPU time.
Dec 11 08:44:19 np0005555078.novalocal systemd[1]: run-r7e07fa5a83304366a25ec881f8f1244b.service: Deactivated successfully.
Dec 11 08:44:28 np0005555078.novalocal sshd-session[29575]: Connection closed by 38.102.83.179 port 55148 [preauth]
Dec 11 08:44:28 np0005555078.novalocal sshd-session[29577]: Connection closed by 38.102.83.179 port 55160 [preauth]
Dec 11 08:44:28 np0005555078.novalocal sshd-session[29574]: Unable to negotiate with 38.102.83.179 port 55162: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 11 08:44:28 np0005555078.novalocal sshd-session[29573]: Unable to negotiate with 38.102.83.179 port 55172: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 11 08:44:28 np0005555078.novalocal sshd-session[29576]: Unable to negotiate with 38.102.83.179 port 55174: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 11 08:44:33 np0005555078.novalocal sshd-session[29583]: Accepted publickey for zuul from 38.102.83.114 port 54780 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:44:33 np0005555078.novalocal systemd-logind[791]: New session 7 of user zuul.
Dec 11 08:44:33 np0005555078.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 11 08:44:33 np0005555078.novalocal sshd-session[29583]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:44:33 np0005555078.novalocal python3[29610]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBstpakOyiBUkVKE8qhLvJSJmnUPKz1ryqhyWx7jyzgwnQhXG4D3sCzq6j9vQt4UHZd7CtghkmU8N5sKq6RWC78= zuul@np0005555076.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:44:34 np0005555078.novalocal sudo[29634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmykxcjnvanuqqblshvaamknsezcpbha ; /usr/bin/python3'
Dec 11 08:44:34 np0005555078.novalocal sudo[29634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:34 np0005555078.novalocal python3[29636]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBstpakOyiBUkVKE8qhLvJSJmnUPKz1ryqhyWx7jyzgwnQhXG4D3sCzq6j9vQt4UHZd7CtghkmU8N5sKq6RWC78= zuul@np0005555076.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:44:34 np0005555078.novalocal sudo[29634]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:35 np0005555078.novalocal sudo[29660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vumefidkgtcsspylxlnghpztcmahqyqg ; /usr/bin/python3'
Dec 11 08:44:35 np0005555078.novalocal sudo[29660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:35 np0005555078.novalocal python3[29662]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005555078.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 11 08:44:35 np0005555078.novalocal useradd[29664]: new group: name=cloud-admin, GID=1002
Dec 11 08:44:35 np0005555078.novalocal useradd[29664]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 11 08:44:35 np0005555078.novalocal sudo[29660]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:35 np0005555078.novalocal sudo[29694]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fafzsgikvwdiyuocgjpaeyrtaiytsgaq ; /usr/bin/python3'
Dec 11 08:44:35 np0005555078.novalocal sudo[29694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:36 np0005555078.novalocal python3[29696]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBstpakOyiBUkVKE8qhLvJSJmnUPKz1ryqhyWx7jyzgwnQhXG4D3sCzq6j9vQt4UHZd7CtghkmU8N5sKq6RWC78= zuul@np0005555076.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 11 08:44:36 np0005555078.novalocal sudo[29694]: pam_unix(sudo:session): session closed for user root
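[editor's note] The three authorized_key tasks above install the same ECDSA public key for zuul, root, and the newly created cloud-admin user. A stripped-down sketch of the idempotent append the module performs (ownership changes are omitted; the key string below is a placeholder, substitute the full key from the log):

    import os
    import pathlib

    def authorize_key(home: str, key: str) -> None:
        ssh_dir = pathlib.Path(home, ".ssh")
        ssh_dir.mkdir(mode=0o700, exist_ok=True)     # manage_dir=True behaviour
        auth = ssh_dir / "authorized_keys"
        lines = auth.read_text().splitlines() if auth.exists() else []
        if key not in lines:                          # state=present, no duplicates
            lines.append(key)
            auth.write_text("\n".join(lines) + "\n")
        os.chmod(auth, 0o600)                         # OpenSSH requires tight permissions

    # Placeholder key; the real one appears in full in the log lines above.
    authorize_key("/home/cloud-admin", "ecdsa-sha2-nistp256 AAAA...EXAMPLE zuul@np0005555076.novalocal")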
Dec 11 08:44:36 np0005555078.novalocal sudo[29772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbnwgmmetvhlntzlcyvncfbephsycvtn ; /usr/bin/python3'
Dec 11 08:44:36 np0005555078.novalocal sudo[29772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:36 np0005555078.novalocal python3[29774]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:44:36 np0005555078.novalocal sudo[29772]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:36 np0005555078.novalocal sudo[29845]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usjvjbycwmtyoiobsaetoaowchisoofq ; /usr/bin/python3'
Dec 11 08:44:36 np0005555078.novalocal sudo[29845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:37 np0005555078.novalocal python3[29847]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765442676.2467463-168-203724394972024/source _original_basename=tmpqi1yajsk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:44:37 np0005555078.novalocal sudo[29845]: pam_unix(sudo:session): session closed for user root
Dec 11 08:44:37 np0005555078.novalocal sudo[29895]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwshddedtvvgbmajqwhmueylamqyafie ; /usr/bin/python3'
Dec 11 08:44:37 np0005555078.novalocal sudo[29895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:44:38 np0005555078.novalocal python3[29897]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Dec 11 08:44:38 np0005555078.novalocal systemd[1]: Starting Hostname Service...
Dec 11 08:44:38 np0005555078.novalocal systemd[1]: Started Hostname Service.
Dec 11 08:44:38 np0005555078.novalocal systemd-hostnamed[29901]: Changed pretty hostname to 'compute-1'
Dec 11 08:44:38 compute-1 systemd-hostnamed[29901]: Hostname set to <compute-1> (static)
Dec 11 08:44:38 compute-1 NetworkManager[7186]: <info>  [1765442678.2860] hostname: static hostname changed from "np0005555078.novalocal" to "compute-1"
Dec 11 08:44:38 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 08:44:38 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 08:44:38 compute-1 sudo[29895]: pam_unix(sudo:session): session closed for user root
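[editor's note] ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why systemd-hostnamed logs both the pretty and static hostname changes and NetworkManager immediately picks up the new static hostname (note the host field flips from np0005555078.novalocal to compute-1 mid-log). The same change from Python, assuming hostnamectl is present (it talks to the same D-Bus service):

    import subprocess
    # Sets the static hostname via systemd-hostnamed, mirroring the Ansible task.
    subprocess.run(["hostnamectl", "set-hostname", "compute-1"], check=True)  # requires root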
Dec 11 08:44:38 compute-1 sshd-session[29586]: Connection closed by 38.102.83.114 port 54780
Dec 11 08:44:38 compute-1 sshd-session[29583]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:44:38 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Dec 11 08:44:38 compute-1 systemd[1]: session-7.scope: Consumed 2.587s CPU time.
Dec 11 08:44:38 compute-1 systemd-logind[791]: Session 7 logged out. Waiting for processes to exit.
Dec 11 08:44:38 compute-1 systemd-logind[791]: Removed session 7.
Dec 11 08:44:48 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 08:45:08 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 08:45:33 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 11 08:45:33 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 11 08:45:33 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 11 08:45:33 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 11 08:49:47 compute-1 sshd-session[29924]: Accepted publickey for zuul from 38.102.83.179 port 36268 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 08:49:47 compute-1 systemd-logind[791]: New session 8 of user zuul.
Dec 11 08:49:47 compute-1 systemd[1]: Started Session 8 of User zuul.
Dec 11 08:49:47 compute-1 sshd-session[29924]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 08:49:47 compute-1 python3[30000]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 08:49:49 compute-1 sudo[30114]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwwfxfurpdqrtnyoqjxwovznpjkwusx ; /usr/bin/python3'
Dec 11 08:49:49 compute-1 sudo[30114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:49 compute-1 python3[30116]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:49 compute-1 sudo[30114]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:50 compute-1 sudo[30187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbiffromjcoxitfjkgrlhdfjjwjmhjut ; /usr/bin/python3'
Dec 11 08:49:50 compute-1 sudo[30187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:50 compute-1 python3[30189]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:50 compute-1 sudo[30187]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:50 compute-1 sudo[30213]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxummeiyxihjuncbzenhbfwfqyprpyet ; /usr/bin/python3'
Dec 11 08:49:50 compute-1 sudo[30213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:50 compute-1 python3[30215]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:50 compute-1 sudo[30213]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:50 compute-1 sudo[30286]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asxobskrrfxiffqtsqckaeoppckwtrma ; /usr/bin/python3'
Dec 11 08:49:50 compute-1 sudo[30286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-1 python3[30288]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:51 compute-1 sudo[30286]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:51 compute-1 sudo[30312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inwtzirmaukxggdcdpgzvzvavkefnczd ; /usr/bin/python3'
Dec 11 08:49:51 compute-1 sudo[30312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-1 python3[30314]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:51 compute-1 sudo[30312]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:51 compute-1 sudo[30385]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmgesszmxpbcdvtfdemxilinmmcnqfku ; /usr/bin/python3'
Dec 11 08:49:51 compute-1 sudo[30385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-1 python3[30387]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:51 compute-1 sudo[30385]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:51 compute-1 sudo[30411]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-athniyrksrnvznbdinvfwadtfmwzmbba ; /usr/bin/python3'
Dec 11 08:49:51 compute-1 sudo[30411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:51 compute-1 python3[30413]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:51 compute-1 sudo[30411]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-1 sudo[30484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phnqvctbympunjqwfrfjginvgvhnhbvi ; /usr/bin/python3'
Dec 11 08:49:52 compute-1 sudo[30484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:52 compute-1 python3[30486]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:52 compute-1 sudo[30484]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-1 sudo[30510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umtrbtvnuceojgngkkrfxlxeoilwgdia ; /usr/bin/python3'
Dec 11 08:49:52 compute-1 sudo[30510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:52 compute-1 python3[30512]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:52 compute-1 sudo[30510]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-1 sudo[30583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfuhuuthpxvikiyhxealuzmlecwujpup ; /usr/bin/python3'
Dec 11 08:49:52 compute-1 sudo[30583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:52 compute-1 python3[30585]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:52 compute-1 sudo[30583]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:52 compute-1 sudo[30609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liiucrvwzqeamshzjcecfqhmhfzktpzr ; /usr/bin/python3'
Dec 11 08:49:52 compute-1 sudo[30609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:53 compute-1 python3[30611]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:53 compute-1 sudo[30609]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:53 compute-1 sudo[30682]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftjaktuagwokirzgeyeckwyqlixlrbp ; /usr/bin/python3'
Dec 11 08:49:53 compute-1 sudo[30682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:53 compute-1 python3[30684]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:53 compute-1 sudo[30682]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:53 compute-1 sudo[30708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxtbfzoidceksvfloayecpbgmcuhcdxb ; /usr/bin/python3'
Dec 11 08:49:53 compute-1 sudo[30708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:53 compute-1 python3[30710]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 08:49:53 compute-1 sudo[30708]: pam_unix(sudo:session): session closed for user root
Dec 11 08:49:54 compute-1 sudo[30781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hntbetbvmxousmpijvzsowfjaqcxswpf ; /usr/bin/python3'
Dec 11 08:49:54 compute-1 sudo[30781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 08:49:54 compute-1 python3[30783]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765442989.6163914-33956-263948254215993/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 08:49:54 compute-1 sudo[30781]: pam_unix(sudo:session): session closed for user root
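
The three copy tasks above drop CI repository definitions (appstream, baseos, and the delorean hash file) into /etc/yum.repos.d/; their bodies are redacted in the log (content=NOT_LOGGING_PARAMETER), so only the names, sha1 checksums, and the 0755 mode are visible. A minimal shell equivalent of the same file drop, assuming the .repo files exist locally:

    # Sketch of the ansible.legacy.copy tasks above; the actual .repo
    # bodies are not logged, only their sha1 checksums.
    install -m 0755 repo-setup-centos-appstream.repo /etc/yum.repos.d/
    install -m 0755 repo-setup-centos-baseos.repo    /etc/yum.repos.d/
    install -m 0755 delorean.repo.md5                /etc/yum.repos.d/
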
Dec 11 08:50:06 compute-1 python3[30832]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 08:55:06 compute-1 sshd-session[29927]: Received disconnect from 38.102.83.179 port 36268:11: disconnected by user
Dec 11 08:55:06 compute-1 sshd-session[29927]: Disconnected from user zuul 38.102.83.179 port 36268
Dec 11 08:55:06 compute-1 sshd-session[29924]: pam_unix(sshd:session): session closed for user zuul
Dec 11 08:55:06 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Dec 11 08:55:06 compute-1 systemd[1]: session-8.scope: Consumed 5.256s CPU time.
Dec 11 08:55:06 compute-1 systemd-logind[791]: Session 8 logged out. Waiting for processes to exit.
Dec 11 08:55:06 compute-1 systemd-logind[791]: Removed session 8.
Dec 11 09:01:01 compute-1 CROND[30840]: (root) CMD (run-parts /etc/cron.hourly)
Dec 11 09:01:01 compute-1 run-parts[30843]: (/etc/cron.hourly) starting 0anacron
Dec 11 09:01:01 compute-1 anacron[30851]: Anacron started on 2025-12-11
Dec 11 09:01:01 compute-1 anacron[30851]: Will run job `cron.daily' in 37 min.
Dec 11 09:01:01 compute-1 anacron[30851]: Will run job `cron.weekly' in 57 min.
Dec 11 09:01:01 compute-1 anacron[30851]: Will run job `cron.monthly' in 77 min.
Dec 11 09:01:01 compute-1 anacron[30851]: Jobs will be executed sequentially
Dec 11 09:01:01 compute-1 run-parts[30853]: (/etc/cron.hourly) finished 0anacron
Dec 11 09:01:01 compute-1 CROND[30839]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 11 09:03:00 compute-1 sshd-session[30855]: Accepted publickey for zuul from 192.168.122.30 port 35212 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:03:00 compute-1 systemd-logind[791]: New session 9 of user zuul.
Dec 11 09:03:00 compute-1 systemd[1]: Started Session 9 of User zuul.
Dec 11 09:03:00 compute-1 sshd-session[30855]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:03:01 compute-1 python3.9[31008]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:02 compute-1 sudo[31188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqgynsaiqfbrfmcetlgyhbupyrmwjxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443782.528504-58-198946852331828/AnsiballZ_command.py'
Dec 11 09:03:02 compute-1 sudo[31188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:03 compute-1 python3.9[31190]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
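
The multi-line command above is the repo-setup bootstrap: it fetches the openstack-k8s-operators/repo-setup tool from GitHub, installs it into a throwaway virtualenv (PBR_VERSION=0.0.0 sidesteps pbr's git-based version detection), points the host at the current-podified antelope repositories, and removes the checkout. The repo files it writes are what dnf enumerates at 09:04:33 below. A quick post-run check, using only stock dnf:

    # Verify the repos that repo-setup just wrote (standard commands,
    # nothing assumed beyond the log):
    ls /etc/yum.repos.d/
    dnf repolist          # enabled repos, e.g. the delorean-* set
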
Dec 11 09:03:11 compute-1 sudo[31188]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:12 compute-1 sshd-session[30858]: Connection closed by 192.168.122.30 port 35212
Dec 11 09:03:12 compute-1 sshd-session[30855]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:03:13 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Dec 11 09:03:13 compute-1 systemd[1]: session-9.scope: Consumed 8.614s CPU time.
Dec 11 09:03:13 compute-1 systemd-logind[791]: Session 9 logged out. Waiting for processes to exit.
Dec 11 09:03:13 compute-1 systemd-logind[791]: Removed session 9.
Dec 11 09:03:29 compute-1 sshd-session[31248]: Accepted publickey for zuul from 192.168.122.30 port 53056 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:03:29 compute-1 systemd-logind[791]: New session 10 of user zuul.
Dec 11 09:03:29 compute-1 systemd[1]: Started Session 10 of User zuul.
Dec 11 09:03:29 compute-1 sshd-session[31248]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:03:30 compute-1 python3.9[31401]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 11 09:03:31 compute-1 python3.9[31575]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:32 compute-1 sudo[31725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seeakqrhijweobfzdprmncvphyudebvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443811.9830043-94-144220394509672/AnsiballZ_command.py'
Dec 11 09:03:32 compute-1 sudo[31725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:32 compute-1 python3.9[31727]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:03:32 compute-1 sudo[31725]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:33 compute-1 sudo[31878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nayhumsfaxaatkyfcivafekgwwzgprry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443813.082224-130-84186572916123/AnsiballZ_stat.py'
Dec 11 09:03:33 compute-1 sudo[31878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:33 compute-1 python3.9[31880]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:03:33 compute-1 sudo[31878]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:34 compute-1 sudo[32030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucsimrmilgxmwlqgrueqcsghqyaycpmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443813.9381793-154-195052176621772/AnsiballZ_file.py'
Dec 11 09:03:34 compute-1 sudo[32030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:34 compute-1 python3.9[32032]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:03:34 compute-1 sudo[32030]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:35 compute-1 sudo[32182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuvvasqixvwpfmzxlfxdpypgxbvlswor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443814.7878253-178-242514857146686/AnsiballZ_stat.py'
Dec 11 09:03:35 compute-1 sudo[32182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:35 compute-1 python3.9[32184]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:03:35 compute-1 sudo[32182]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:35 compute-1 sudo[32305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcpomukqpoespvasmxtnkuacuhrdmlmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443814.7878253-178-242514857146686/AnsiballZ_copy.py'
Dec 11 09:03:35 compute-1 sudo[32305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:35 compute-1 python3.9[32307]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765443814.7878253-178-242514857146686/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:03:36 compute-1 sudo[32305]: pam_unix(sudo:session): session closed for user root
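
This stat/copy pair installs an executable custom fact at /etc/ansible/facts.d/bootc.fact (mode 755), and the setup run at 09:03:36 re-gathers the 'local' subset so it is picked up as ansible_local.bootc. The file body is redacted, so the following is only a plausible shape for such a fact script, assuming it probes for a bootc-managed system; an executable .fact must emit JSON on stdout:

    #!/bin/sh
    # Hypothetical /etc/ansible/facts.d/bootc.fact -- the real contents
    # are not logged. Executable facts print JSON, exposed by the setup
    # module as ansible_local.bootc.
    if command -v bootc >/dev/null 2>&1; then
        echo '{"available": true}'
    else
        echo '{"available": false}'
    fi
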
Dec 11 09:03:36 compute-1 sudo[32457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rowuphocboekledqtiakdoekquyjsduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443816.1903255-223-126722432585226/AnsiballZ_setup.py'
Dec 11 09:03:36 compute-1 sudo[32457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:36 compute-1 python3.9[32459]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:36 compute-1 sudo[32457]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:37 compute-1 sudo[32613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipatqpkmqtknxaorwyofwriaxqntgaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443817.2681096-247-154597903078745/AnsiballZ_file.py'
Dec 11 09:03:37 compute-1 sudo[32613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:37 compute-1 python3.9[32615]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:03:37 compute-1 sudo[32613]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:38 compute-1 sudo[32765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zacednsminlutscdqcwfxjsjyhgckcwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443818.0258904-274-221705003037470/AnsiballZ_file.py'
Dec 11 09:03:38 compute-1 sudo[32765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:38 compute-1 python3.9[32767]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:03:38 compute-1 sudo[32765]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:39 compute-1 python3.9[32917]: ansible-ansible.builtin.service_facts Invoked
Dec 11 09:03:43 compute-1 python3.9[33170]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:03:43 compute-1 python3.9[33320]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:45 compute-1 python3.9[33474]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:03:46 compute-1 sudo[33630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djoriclgbbcarwmgogylqfvbxudcuztv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443825.7732203-418-94112599906631/AnsiballZ_setup.py'
Dec 11 09:03:46 compute-1 sudo[33630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:46 compute-1 python3.9[33632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:03:46 compute-1 sudo[33630]: pam_unix(sudo:session): session closed for user root
Dec 11 09:03:47 compute-1 sudo[33714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hibhfjzyuqbuajxwiegtkwauteuhigmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443825.7732203-418-94112599906631/AnsiballZ_dnf.py'
Dec 11 09:03:47 compute-1 sudo[33714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:03:47 compute-1 python3.9[33716]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
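
The dnf task above installs the base EDPM node tool set in one transaction; the systemd reloads and the dnf-makecache run that follow are side effects of lvm2 and friends shipping systemd units. Shell equivalent, package list verbatim from the log:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container \
        crypto-policies-scripts grubby sos
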
Dec 11 09:04:32 compute-1 systemd[1]: Reloading.
Dec 11 09:04:32 compute-1 systemd-rc-local-generator[33913]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:04:32 compute-1 systemd[1]: Starting dnf makecache...
Dec 11 09:04:32 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 11 09:04:33 compute-1 dnf[33926]: Failed determining last makecache time.
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-barbican-42b4c41831408a8e323 148 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 165 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-cinder-1c00d6490d88e436f26ef 151 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-python-stevedore-c4acc5639fd2329372142 152 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-python-cloudkitty-tests-tempest-2c80f8 155 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd[1]: Reloading.
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-os-refresh-config-9bfc52b5049be2d8de61 148 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 156 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd-rc-local-generator[33965]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-python-designate-tests-tempest-347fdbc 147 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-glance-1fd12c29b339f30fe823e 163 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 152 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-manila-3c01b7181572c95dac462 170 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-python-whitebox-neutron-tests-tempest- 157 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-octavia-ba397f07a7331190208c 161 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-watcher-c014f81a8647287f6dcc 180 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-ansible-config_template-5ccaa22121a7ff 190 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 151 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-swift-dc98a8463506ac520c469a 200 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd[1]: Reloading.
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-python-tempestconf-8515371b7cceebd4282 191 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 dnf[33926]: delorean-openstack-heat-ui-013accbfd179753bc3f0 176 kB/s | 3.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd-rc-local-generator[34018]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:04:33 compute-1 dnf[33926]: CentOS Stream 9 - BaseOS                         72 kB/s | 7.0 kB     00:00
Dec 11 09:04:33 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 11 09:04:33 compute-1 dnf[33926]: CentOS Stream 9 - AppStream                      71 kB/s | 7.4 kB     00:00
Dec 11 09:04:33 compute-1 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:04:33 compute-1 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:04:33 compute-1 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:04:34 compute-1 dnf[33926]: CentOS Stream 9 - CRB                            68 kB/s | 6.9 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: CentOS Stream 9 - Extras packages                78 kB/s | 8.3 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: dlrn-antelope-testing                           136 kB/s | 3.0 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: dlrn-antelope-build-deps                        183 kB/s | 3.0 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: centos9-rabbitmq                                126 kB/s | 3.0 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: centos9-storage                                 135 kB/s | 3.0 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: centos9-opstools                                147 kB/s | 3.0 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: NFV SIG OpenvSwitch                             153 kB/s | 3.0 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: repo-setup-centos-appstream                     218 kB/s | 4.4 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: repo-setup-centos-baseos                        180 kB/s | 3.9 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: repo-setup-centos-highavailability              193 kB/s | 3.9 kB     00:00
Dec 11 09:04:34 compute-1 dnf[33926]: repo-setup-centos-powertools                    223 kB/s | 4.3 kB     00:00
Dec 11 09:04:35 compute-1 dnf[33926]: Extra Packages for Enterprise Linux 9 - x86_64   33 kB/s |  30 kB     00:00
Dec 11 09:04:36 compute-1 dnf[33926]: Metadata cache created.
Dec 11 09:04:36 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 11 09:04:36 compute-1 systemd[1]: Finished dnf makecache.
Dec 11 09:04:36 compute-1 systemd[1]: dnf-makecache.service: Consumed 1.793s CPU time.
Dec 11 09:05:39 compute-1 kernel: SELinux:  Converting 2719 SID table entries...
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 09:05:39 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 09:05:39 compute-1 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 11 09:05:39 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:05:39 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:05:39 compute-1 systemd[1]: Reloading.
Dec 11 09:05:39 compute-1 systemd-rc-local-generator[34388]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:05:39 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:05:40 compute-1 sudo[33714]: pam_unix(sudo:session): session closed for user root
Dec 11 09:05:41 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:05:41 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:05:41 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.246s CPU time.
Dec 11 09:05:41 compute-1 systemd[1]: run-rc297e2bb05194d238fa08ee7194e72d5.service: Deactivated successfully.
Dec 11 09:05:49 compute-1 irqbalance[783]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 11 09:05:49 compute-1 irqbalance[783]: IRQ 26 affinity is now unmanaged
Dec 11 09:06:00 compute-1 sudo[35293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvkundnlilmusozczbqgenrudgqisxno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443960.2922275-455-259479980655533/AnsiballZ_command.py'
Dec 11 09:06:00 compute-1 sudo[35293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:00 compute-1 python3.9[35295]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:01 compute-1 sudo[35293]: pam_unix(sudo:session): session closed for user root
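
rpm -V re-checks every file of the just-installed packages against the RPM database (size, mode, digest, owner, and so on); it prints nothing and exits 0 when nothing has drifted, which is what a clean install should produce. For a single package:

    rpm -V lvm2 && echo 'lvm2 matches the rpm database'
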
Dec 11 09:06:02 compute-1 sudo[35574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdlnvunaufhucaeesajokuimcuimtsze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443961.8735154-478-214889616147421/AnsiballZ_selinux.py'
Dec 11 09:06:02 compute-1 sudo[35574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:02 compute-1 python3.9[35576]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 11 09:06:02 compute-1 sudo[35574]: pam_unix(sudo:session): session closed for user root
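
ansible.posix.selinux with policy=targeted state=enforcing both flips the running mode and persists it in /etc/selinux/config. Shell equivalent:

    setenforce 1
    sed -i -e 's/^SELINUX=.*/SELINUX=enforcing/' \
           -e 's/^SELINUXTYPE=.*/SELINUXTYPE=targeted/' /etc/selinux/config
    getenforce    # -> Enforcing
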
Dec 11 09:06:03 compute-1 sudo[35726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-murnpadodqjcdvrkjfmspsefoghrswue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443963.2275653-511-150462482407330/AnsiballZ_command.py'
Dec 11 09:06:03 compute-1 sudo[35726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:03 compute-1 python3.9[35728]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 11 09:06:04 compute-1 sudo[35726]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:06 compute-1 sudo[35881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djtxkmtlgzxozchdvkyzvrftoumjgaxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443966.4384546-535-107947302994015/AnsiballZ_file.py'
Dec 11 09:06:06 compute-1 sudo[35881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:07 compute-1 python3.9[35883]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:06:07 compute-1 sudo[35881]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:08 compute-1 sudo[36033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmcmphpkibrhnzmsmavczogjzkflxxqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443967.5608754-559-200760564853217/AnsiballZ_mount.py'
Dec 11 09:06:08 compute-1 sudo[36033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:08 compute-1 python3.9[36035]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 11 09:06:08 compute-1 sudo[36033]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:09 compute-1 irqbalance[783]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 11 09:06:09 compute-1 irqbalance[783]: IRQ 27 affinity is now unmanaged
Dec 11 09:06:10 compute-1 sudo[36185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkhepgptmpcutthgkkyxwuvmocpsewnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443969.6234958-643-254748213583594/AnsiballZ_file.py'
Dec 11 09:06:10 compute-1 sudo[36185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:12 compute-1 python3.9[36187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:12 compute-1 sudo[36185]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:13 compute-1 sudo[36337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlasylnkmbecrdsjrwfmwgyjowozckqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443973.0796268-667-248231949545384/AnsiballZ_stat.py'
Dec 11 09:06:13 compute-1 sudo[36337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:16 compute-1 python3.9[36339]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:06:16 compute-1 sudo[36337]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:16 compute-1 sudo[36460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiibhkbslwohcbvnmfhaagdwisacrqct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443973.0796268-667-248231949545384/AnsiballZ_copy.py'
Dec 11 09:06:16 compute-1 sudo[36460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:16 compute-1 python3.9[36462]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765443973.0796268-667-248231949545384/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=b8579c206b05c2d6a847310b06d4e3aec15650c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:06:16 compute-1 sudo[36460]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:17 compute-1 sudo[36612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bujucjzadhmmpyhuaffoorpvpmvcwqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443977.7180011-739-42747783611770/AnsiballZ_stat.py'
Dec 11 09:06:17 compute-1 sudo[36612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:18 compute-1 python3.9[36614]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:06:18 compute-1 sudo[36612]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:18 compute-1 sudo[36764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqixqtsuwspzloujvmljqjpbqkdqxoin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443978.3988073-763-254843376613191/AnsiballZ_command.py'
Dec 11 09:06:18 compute-1 sudo[36764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:18 compute-1 python3.9[36766]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:18 compute-1 sudo[36764]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:19 compute-1 sudo[36917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljdzuikwusdhlpfppfgepsvtkgpiceie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443979.182134-787-186170773526469/AnsiballZ_file.py'
Dec 11 09:06:19 compute-1 sudo[36917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:19 compute-1 python3.9[36919]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:06:19 compute-1 sudo[36917]: pam_unix(sudo:session): session closed for user root
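
These two tasks bring the host onto LVM's devices-file mechanism: vgimportdevices --all records any visible volume groups in /etc/lvm/devices/system.devices, and the follow-up touch (mode 0600) guarantees the file exists even on a host with no VGs, so LVM device filtering stays deterministic. Condensed to shell:

    /usr/sbin/vgimportdevices --all || true   # no-op if there are no VGs
    if [ ! -e /etc/lvm/devices/system.devices ]; then
        touch /etc/lvm/devices/system.devices
        chmod 0600 /etc/lvm/devices/system.devices
    fi
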
Dec 11 09:06:21 compute-1 sudo[37069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebmmrrxepcmrjfwpzgkhkvsuxaufxuov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443980.8335512-820-232902653214942/AnsiballZ_getent.py'
Dec 11 09:06:21 compute-1 sudo[37069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:21 compute-1 python3.9[37071]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 11 09:06:21 compute-1 sudo[37069]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:21 compute-1 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 09:06:22 compute-1 sudo[37223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvxuebeifvqsgdtipskarqckinajxylf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443981.8318627-844-21215429595561/AnsiballZ_group.py'
Dec 11 09:06:22 compute-1 sudo[37223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:22 compute-1 python3.9[37225]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 09:06:22 compute-1 groupadd[37226]: group added to /etc/group: name=qemu, GID=107
Dec 11 09:06:22 compute-1 groupadd[37226]: group added to /etc/gshadow: name=qemu
Dec 11 09:06:22 compute-1 groupadd[37226]: new group: name=qemu, GID=107
Dec 11 09:06:22 compute-1 sudo[37223]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:23 compute-1 sudo[37381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzmbqwrfkzfovrbvjxlulgezexsvuovu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443982.848506-868-46558698611825/AnsiballZ_user.py'
Dec 11 09:06:23 compute-1 sudo[37381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:23 compute-1 python3.9[37383]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 09:06:23 compute-1 useradd[37385]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 11 09:06:23 compute-1 sudo[37381]: pam_unix(sudo:session): session closed for user root
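
The getent/group/user triple pins the qemu account to fixed IDs (UID/GID 107) before any package can create it with different ones. Shell equivalent of what the modules did here:

    getent passwd qemu >/dev/null || {
        groupadd -g 107 qemu
        useradd -u 107 -g 107 -s /sbin/nologin -c 'qemu user' qemu
    }
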
Dec 11 09:06:24 compute-1 sudo[37541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhsxoxpqqizvxwvgytapskvnsgczgtya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443984.0167396-892-194262484352126/AnsiballZ_getent.py'
Dec 11 09:06:24 compute-1 sudo[37541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:24 compute-1 python3.9[37543]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 11 09:06:24 compute-1 sudo[37541]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:25 compute-1 sudo[37694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mytjqpdjuquxvorhkccdfsgusljozqva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443984.7828739-916-265971966768345/AnsiballZ_group.py'
Dec 11 09:06:25 compute-1 sudo[37694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:25 compute-1 python3.9[37696]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 09:06:25 compute-1 groupadd[37697]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 11 09:06:25 compute-1 groupadd[37697]: group added to /etc/gshadow: name=hugetlbfs
Dec 11 09:06:25 compute-1 groupadd[37697]: new group: name=hugetlbfs, GID=42477
Dec 11 09:06:25 compute-1 sudo[37694]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:26 compute-1 sudo[37852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neyloharcgtqviagwqtqqwitgezugobf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443985.751589-943-173051377425059/AnsiballZ_file.py'
Dec 11 09:06:26 compute-1 sudo[37852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:26 compute-1 python3.9[37854]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 11 09:06:26 compute-1 sudo[37852]: pam_unix(sudo:session): session closed for user root
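
Same pattern for hugetlbfs (fixed GID 42477), followed by a vhost socket directory owned by qemu and labeled virt_cache_t. The file module applies the context directly, so chcon is the closer shell equivalent; a persistent variant would add a semanage fcontext rule and restorecon instead:

    groupadd -g 42477 hugetlbfs
    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets
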
Dec 11 09:06:27 compute-1 sudo[38004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gninoifwgcezgxxlnkwieukdhnholoxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443986.773045-976-166428763891534/AnsiballZ_dnf.py'
Dec 11 09:06:27 compute-1 sudo[38004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:27 compute-1 python3.9[38006]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:06:29 compute-1 sudo[38004]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:30 compute-1 sudo[38158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhsfzwetwwibhdwxhtobsakpjyqsogve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443990.577239-1000-212240865644729/AnsiballZ_file.py'
Dec 11 09:06:30 compute-1 sudo[38158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:31 compute-1 python3.9[38160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:31 compute-1 sudo[38158]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:31 compute-1 sudo[38310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkfpzyinnrzjmmlyzdxeallxpeyewskf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443991.269622-1024-82582175547724/AnsiballZ_stat.py'
Dec 11 09:06:31 compute-1 sudo[38310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:31 compute-1 python3.9[38312]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:06:31 compute-1 sudo[38310]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:32 compute-1 sudo[38433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lakhpuerhkzmpasyaucjmlvzckgeganc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443991.269622-1024-82582175547724/AnsiballZ_copy.py'
Dec 11 09:06:32 compute-1 sudo[38433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:32 compute-1 python3.9[38435]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765443991.269622-1024-82582175547724/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:32 compute-1 sudo[38433]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:33 compute-1 sudo[38585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtydrnvuahxijmcszcqgdfrhmqccadvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443992.7226224-1069-119208345473349/AnsiballZ_systemd.py'
Dec 11 09:06:33 compute-1 sudo[38585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:33 compute-1 python3.9[38587]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:06:33 compute-1 systemd[1]: Starting Load Kernel Modules...
Dec 11 09:06:33 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 11 09:06:33 compute-1 kernel: Bridge firewalling registered
Dec 11 09:06:33 compute-1 systemd-modules-load[38591]: Inserted module 'br_netfilter'
Dec 11 09:06:33 compute-1 systemd[1]: Finished Load Kernel Modules.
Dec 11 09:06:33 compute-1 sudo[38585]: pam_unix(sudo:session): session closed for user root
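
The copy above writes /etc/modules-load.d/99-edpm.conf and the systemd-modules-load restart applies it; the kernel lines confirm br_netfilter was among the modules listed (the full drop-in body is not logged). A minimal reproduction of what the log shows:

    # Only br_netfilter is confirmed by the log; the real 99-edpm.conf
    # may list more modules.
    printf 'br_netfilter\n' > /etc/modules-load.d/99-edpm.conf
    systemctl restart systemd-modules-load.service
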
Dec 11 09:06:34 compute-1 sudo[38745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkoounuparvqbdefiugvgqxmldkkvamf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443993.913785-1093-130620140935649/AnsiballZ_stat.py'
Dec 11 09:06:34 compute-1 sudo[38745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:34 compute-1 python3.9[38747]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:06:34 compute-1 sudo[38745]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:34 compute-1 sudo[38868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyrazzbfgtlzyupovpyvliksnvsdnnne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443993.913785-1093-130620140935649/AnsiballZ_copy.py'
Dec 11 09:06:34 compute-1 sudo[38868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:34 compute-1 python3.9[38870]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765443993.913785-1093-130620140935649/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:06:34 compute-1 sudo[38868]: pam_unix(sudo:session): session closed for user root
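
/etc/sysctl.d/99-edpm.conf is written the same way; only its sha1 is logged, so the keys themselves are unknown here. The values take effect when systemd-sysctl is restarted at 09:06:58 below, which is equivalent to:

    systemctl restart systemd-sysctl.service
    sysctl --system    # same effect: re-reads every /etc/sysctl.d drop-in
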
Dec 11 09:06:35 compute-1 sudo[39020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obeqzhfxzvewmzbvvvnajadtwdbcdjhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765443995.3838286-1147-189312875671442/AnsiballZ_dnf.py'
Dec 11 09:06:35 compute-1 sudo[39020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:35 compute-1 python3.9[39022]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:06:39 compute-1 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:06:39 compute-1 dbus-broker-launch[746]: Noticed file-system modification, trigger reload.
Dec 11 09:06:39 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:06:39 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:06:39 compute-1 systemd[1]: Reloading.
Dec 11 09:06:39 compute-1 systemd-rc-local-generator[39087]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:06:39 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:06:40 compute-1 sudo[39020]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:42 compute-1 python3.9[41650]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:06:43 compute-1 python3.9[42552]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 11 09:06:43 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:06:43 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:06:43 compute-1 systemd[1]: man-db-cache-update.service: Consumed 4.785s CPU time.
Dec 11 09:06:43 compute-1 systemd[1]: run-r73e2f7c7ab5f4be3b674dda78aed5141.service: Deactivated successfully.
Dec 11 09:06:43 compute-1 python3.9[43043]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:06:44 compute-1 sudo[43193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeqsgjpyzjlljvtzcvhqegmfsybtnzis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444004.2332718-1264-241549839941812/AnsiballZ_command.py'
Dec 11 09:06:44 compute-1 sudo[43193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:44 compute-1 python3.9[43195]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:44 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 11 09:06:45 compute-1 systemd[1]: Starting Authorization Manager...
Dec 11 09:06:45 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 11 09:06:45 compute-1 polkitd[43412]: Started polkitd version 0.117
Dec 11 09:06:45 compute-1 polkitd[43412]: Loading rules from directory /etc/polkit-1/rules.d
Dec 11 09:06:45 compute-1 polkitd[43412]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 11 09:06:45 compute-1 polkitd[43412]: Finished loading, compiling and executing 2 rules
Dec 11 09:06:45 compute-1 polkitd[43412]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 11 09:06:45 compute-1 systemd[1]: Started Authorization Manager.
Dec 11 09:06:45 compute-1 sudo[43193]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:45 compute-1 sudo[43580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgzhdneusdrxkvpmaneyppnuqmccfnwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444005.6499612-1291-20879699678707/AnsiballZ_systemd.py'
Dec 11 09:06:45 compute-1 sudo[43580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:46 compute-1 python3.9[43582]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:06:46 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 11 09:06:46 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Dec 11 09:06:46 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 11 09:06:46 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 11 09:06:46 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 11 09:06:46 compute-1 sudo[43580]: pam_unix(sudo:session): session closed for user root
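
tuned is installed, the throughput-performance profile is applied via tuned-adm (which talks to the daemon over D-Bus, hence the polkitd start seen above), and the service is then enabled and restarted. Shell equivalent:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    systemctl enable --now tuned
    tuned-adm profile throughput-performance
    tuned-adm active   # -> Current active profile: throughput-performance
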
Dec 11 09:06:47 compute-1 python3.9[43743]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 11 09:06:50 compute-1 sudo[43893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-islxaxafkoaudeefcdrqgnolaeygwnmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444010.5342977-1462-259285076670542/AnsiballZ_systemd.py'
Dec 11 09:06:50 compute-1 sudo[43893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:51 compute-1 python3.9[43895]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:06:51 compute-1 systemd[1]: Reloading.
Dec 11 09:06:51 compute-1 systemd-rc-local-generator[43926]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:06:51 compute-1 sudo[43893]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:51 compute-1 sudo[44083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fytsbrgimfpxhntmidubpytizafwzjhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444011.5297549-1462-240509648299156/AnsiballZ_systemd.py'
Dec 11 09:06:51 compute-1 sudo[44083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:52 compute-1 python3.9[44085]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:06:52 compute-1 systemd[1]: Reloading.
Dec 11 09:06:52 compute-1 systemd-rc-local-generator[44111]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:06:52 compute-1 sudo[44083]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:53 compute-1 sudo[44272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idipomjnrgjccqogdynzhhjkmvsojtcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444012.7979162-1510-125210570433302/AnsiballZ_command.py'
Dec 11 09:06:53 compute-1 sudo[44272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:53 compute-1 python3.9[44274]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:53 compute-1 sudo[44272]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:54 compute-1 sudo[44425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttpvohiyryhwtjlxvzijvzuyuysefsgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444013.8425183-1534-49725985973504/AnsiballZ_command.py'
Dec 11 09:06:54 compute-1 sudo[44425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:54 compute-1 python3.9[44427]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:54 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 11 09:06:54 compute-1 sudo[44425]: pam_unix(sudo:session): session closed for user root
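
That completes the swap setup started at 09:06:03: a 1 GiB zero-filled file (dd), tightened to 0600, registered in /etc/fstab by ansible.posix.mount (state=present writes the entry without mounting), then formatted and enabled here; the kernel confirms 1048572k at priority -2. The whole sequence, condensed:

    dd if=/dev/zero of=/swap bs=1M count=1024
    chown root:root /swap
    chmod 0600 /swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap
    swapon --show   # verify the new 1 GiB file-backed swap
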
Dec 11 09:06:54 compute-1 sudo[44578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojkiulszkxibkqowgbdtdakoxntknygc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444014.5154068-1558-108467992678506/AnsiballZ_command.py'
Dec 11 09:06:54 compute-1 sudo[44578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:55 compute-1 python3.9[44580]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:56 compute-1 sudo[44578]: pam_unix(sudo:session): session closed for user root
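
update-ca-trust regenerates the consolidated trust stores from the anchors directory, picking up the tls-ca-bundle.pem installed at 09:06:16. Shell equivalent, assuming the bundle is in the current directory:

    install -m 0644 -o root -g root tls-ca-bundle.pem \
        /etc/pki/ca-trust/source/anchors/
    update-ca-trust   # rebuilt bundles land under /etc/pki/ca-trust/extracted/
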
Dec 11 09:06:57 compute-1 sudo[44740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwgrqffptuactycexqxclauslyncnnuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444016.9206622-1582-39542729582048/AnsiballZ_command.py'
Dec 11 09:06:57 compute-1 sudo[44740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:57 compute-1 python3.9[44742]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:06:57 compute-1 sudo[44740]: pam_unix(sudo:session): session closed for user root
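
Kernel samepage merging is switched off for good: ksm.service and ksmtuned.service were stopped and disabled above, and writing 2 to /sys/kernel/mm/ksm/run stops KSM and un-merges all previously shared pages. One hedged caveat: the task runs through ansible.legacy.command with _uses_shell=False, and without a shell the '>' is passed to echo as a literal argument rather than performing a redirect, so the sysfs write as logged may not have landed; the shell form below is the working variant:

    systemctl disable --now ksm.service ksmtuned.service
    # 2 = stop KSM and un-merge every previously merged page
    echo 2 > /sys/kernel/mm/ksm/run
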
Dec 11 09:06:57 compute-1 sudo[44893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgokaxfsjlfrpadeysocgufjuvqoctmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444017.5915775-1606-186593711065881/AnsiballZ_systemd.py'
Dec 11 09:06:57 compute-1 sudo[44893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:06:58 compute-1 python3.9[44895]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:06:58 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 11 09:06:58 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Dec 11 09:06:58 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Dec 11 09:06:58 compute-1 systemd[1]: Starting Apply Kernel Variables...
Dec 11 09:06:58 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 11 09:06:58 compute-1 systemd[1]: Finished Apply Kernel Variables.
Dec 11 09:06:58 compute-1 sudo[44893]: pam_unix(sudo:session): session closed for user root
Dec 11 09:06:58 compute-1 sshd-session[31251]: Connection closed by 192.168.122.30 port 53056
Dec 11 09:06:58 compute-1 sshd-session[31248]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:06:58 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Dec 11 09:06:58 compute-1 systemd[1]: session-10.scope: Consumed 2min 15.925s CPU time.
Dec 11 09:06:58 compute-1 systemd-logind[791]: Session 10 logged out. Waiting for processes to exit.
Dec 11 09:06:58 compute-1 systemd-logind[791]: Removed session 10.
Dec 11 09:07:04 compute-1 sshd-session[44927]: Accepted publickey for zuul from 192.168.122.30 port 55606 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:07:04 compute-1 systemd-logind[791]: New session 11 of user zuul.
Dec 11 09:07:04 compute-1 systemd[1]: Started Session 11 of User zuul.
Dec 11 09:07:04 compute-1 sshd-session[44927]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:07:05 compute-1 python3.9[45080]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:07:06 compute-1 sudo[45234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzykyfmczrvikpyohbhvpcqwljqjpkno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444026.4308777-69-169553129814478/AnsiballZ_getent.py'
Dec 11 09:07:06 compute-1 sudo[45234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:07 compute-1 python3.9[45236]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 11 09:07:07 compute-1 sudo[45234]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:07 compute-1 sudo[45387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yojsjfvopubsbuduhjcaxqxqutofxdbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444027.3263116-93-90890851774203/AnsiballZ_group.py'
Dec 11 09:07:07 compute-1 sudo[45387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:07 compute-1 python3.9[45389]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 11 09:07:07 compute-1 groupadd[45390]: group added to /etc/group: name=openvswitch, GID=42476
Dec 11 09:07:07 compute-1 groupadd[45390]: group added to /etc/gshadow: name=openvswitch
Dec 11 09:07:07 compute-1 groupadd[45390]: new group: name=openvswitch, GID=42476
Dec 11 09:07:08 compute-1 sudo[45387]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:08 compute-1 sudo[45545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thlcqwtpemnwgrlucahmevtbwixsergy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444028.2839887-117-70214639232273/AnsiballZ_user.py'
Dec 11 09:07:08 compute-1 sudo[45545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:08 compute-1 python3.9[45547]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 11 09:07:08 compute-1 useradd[45549]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 11 09:07:08 compute-1 useradd[45549]: add 'openvswitch' to group 'hugetlbfs'
Dec 11 09:07:08 compute-1 useradd[45549]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 11 09:07:09 compute-1 sudo[45545]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:09 compute-1 sudo[45705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlbnyocdwxfjozwcwewqdqgslncrozzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444029.4930751-147-163011517325507/AnsiballZ_setup.py'
Dec 11 09:07:09 compute-1 sudo[45705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:10 compute-1 python3.9[45707]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:07:10 compute-1 sudo[45705]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:10 compute-1 sudo[45789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqqxbsjgyqjozmfaycfrevtzktmyeeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444029.4930751-147-163011517325507/AnsiballZ_dnf.py'
Dec 11 09:07:10 compute-1 sudo[45789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:11 compute-1 python3.9[45791]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 09:07:13 compute-1 sudo[45789]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:14 compute-1 sudo[45955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmoofqgtedyavnclhtdmkfhvyblnikoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444034.5782974-189-46372032354209/AnsiballZ_dnf.py'
Dec 11 09:07:14 compute-1 sudo[45955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:15 compute-1 python3.9[45957]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:26 compute-1 kernel: SELinux:  Converting 2731 SID table entries...
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 09:07:26 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 09:07:26 compute-1 groupadd[45980]: group added to /etc/group: name=unbound, GID=993
Dec 11 09:07:26 compute-1 groupadd[45980]: group added to /etc/gshadow: name=unbound
Dec 11 09:07:26 compute-1 groupadd[45980]: new group: name=unbound, GID=993
Dec 11 09:07:26 compute-1 useradd[45987]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 11 09:07:27 compute-1 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 11 09:07:27 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 11 09:07:28 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:07:28 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:07:28 compute-1 systemd[1]: Reloading.
Dec 11 09:07:28 compute-1 systemd-rc-local-generator[46484]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:28 compute-1 systemd-sysv-generator[46488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:28 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:07:29 compute-1 sudo[45955]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:29 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:07:29 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:07:29 compute-1 systemd[1]: run-rd68e23101dc74e41be3f458ba080307f.service: Deactivated successfully.
Dec 11 09:07:33 compute-1 sudo[47053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnmrnpsmkjappiskfucswveqyzkuantr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444053.1983476-213-267808929976637/AnsiballZ_systemd.py'
Dec 11 09:07:33 compute-1 sudo[47053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:34 compute-1 python3.9[47055]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 11 09:07:34 compute-1 systemd[1]: Reloading.
Dec 11 09:07:34 compute-1 systemd-rc-local-generator[47081]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:34 compute-1 systemd-sysv-generator[47084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:34 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Dec 11 09:07:34 compute-1 chown[47097]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 11 09:07:34 compute-1 ovs-ctl[47102]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 11 09:07:34 compute-1 ovs-ctl[47102]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 11 09:07:34 compute-1 ovs-ctl[47102]: Starting ovsdb-server [  OK  ]
Dec 11 09:07:34 compute-1 ovs-vsctl[47151]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 11 09:07:34 compute-1 ovs-vsctl[47167]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"74ac8996-e2be-4a89-907f-aa7acffb19b6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 11 09:07:34 compute-1 ovs-ctl[47102]: Configuring Open vSwitch system IDs [  OK  ]
Dec 11 09:07:34 compute-1 ovs-vsctl[47175]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 11 09:07:34 compute-1 ovs-ctl[47102]: Enabling remote OVSDB managers [  OK  ]
Dec 11 09:07:34 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Dec 11 09:07:34 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 11 09:07:34 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 11 09:07:34 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 11 09:07:35 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Dec 11 09:07:35 compute-1 ovs-ctl[47221]: Inserting openvswitch module [  OK  ]
Dec 11 09:07:35 compute-1 ovs-ctl[47190]: Starting ovs-vswitchd [  OK  ]
Dec 11 09:07:35 compute-1 ovs-vsctl[47239]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Dec 11 09:07:35 compute-1 ovs-ctl[47190]: Enabling remote OVSDB managers [  OK  ]
Dec 11 09:07:35 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 11 09:07:35 compute-1 systemd[1]: Starting Open vSwitch...
Dec 11 09:07:35 compute-1 systemd[1]: Finished Open vSwitch.
Dec 11 09:07:35 compute-1 sudo[47053]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:36 compute-1 python3.9[47390]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:07:36 compute-1 sudo[47540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-walwmosoyoilogiqjskgljdszlxvdgnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444056.583124-267-226779467728295/AnsiballZ_sefcontext.py'
Dec 11 09:07:36 compute-1 sudo[47540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:37 compute-1 python3.9[47542]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 11 09:07:38 compute-1 kernel: SELinux:  Converting 2745 SID table entries...
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability open_perms=1
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability always_check_network=0
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 11 09:07:38 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 11 09:07:38 compute-1 sudo[47540]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:39 compute-1 python3.9[47697]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:07:40 compute-1 sudo[47853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndyrynpimclzikrbfkqnfwnfefggahiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444060.185137-321-27522440523260/AnsiballZ_dnf.py'
Dec 11 09:07:40 compute-1 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 11 09:07:40 compute-1 sudo[47853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:40 compute-1 python3.9[47855]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:41 compute-1 sudo[47853]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:42 compute-1 sudo[48006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylydkbxujwkqfribazrlxpewestwymod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444062.4449832-345-275402733438116/AnsiballZ_command.py'
Dec 11 09:07:42 compute-1 sudo[48006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:43 compute-1 python3.9[48008]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:07:43 compute-1 sudo[48006]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:44 compute-1 sudo[48293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytxhetohpijbexvdrqsurncgupfyucjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444063.9945903-369-47380379033745/AnsiballZ_file.py'
Dec 11 09:07:44 compute-1 sudo[48293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:44 compute-1 python3.9[48295]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 11 09:07:44 compute-1 sudo[48293]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:45 compute-1 python3.9[48445]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:07:45 compute-1 sudo[48597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxcovhvodixusrfqdcndddpzamxyeeem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444065.5430884-417-140718883467753/AnsiballZ_dnf.py'
Dec 11 09:07:45 compute-1 sudo[48597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:46 compute-1 python3.9[48599]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:48 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:07:48 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:07:48 compute-1 systemd[1]: Reloading.
Dec 11 09:07:48 compute-1 systemd-rc-local-generator[48634]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:48 compute-1 systemd-sysv-generator[48640]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:48 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:07:48 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:07:48 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:07:48 compute-1 systemd[1]: run-r8e9981f417984dd99b00c602dd014bfe.service: Deactivated successfully.
Dec 11 09:07:48 compute-1 sudo[48597]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:50 compute-1 sudo[48914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiearfvtmzgggctypysssgiucdbdmzmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444070.056988-441-188626320328076/AnsiballZ_systemd.py'
Dec 11 09:07:50 compute-1 sudo[48914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:50 compute-1 python3.9[48916]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:07:50 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 11 09:07:50 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Dec 11 09:07:50 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Dec 11 09:07:50 compute-1 systemd[1]: Stopping Network Manager...
Dec 11 09:07:50 compute-1 NetworkManager[7186]: <info>  [1765444070.6584] caught SIGTERM, shutting down normally.
Dec 11 09:07:50 compute-1 NetworkManager[7186]: <info>  [1765444070.6610] dhcp4 (eth0): canceled DHCP transaction
Dec 11 09:07:50 compute-1 NetworkManager[7186]: <info>  [1765444070.6611] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 09:07:50 compute-1 NetworkManager[7186]: <info>  [1765444070.6611] dhcp4 (eth0): state changed no lease
Dec 11 09:07:50 compute-1 NetworkManager[7186]: <info>  [1765444070.6615] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 09:07:50 compute-1 NetworkManager[7186]: <info>  [1765444070.6690] exiting (success)
Dec 11 09:07:50 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 09:07:50 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 09:07:50 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 11 09:07:50 compute-1 systemd[1]: Stopped Network Manager.
Dec 11 09:07:50 compute-1 systemd[1]: NetworkManager.service: Consumed 13.527s CPU time, 4.1M memory peak, read 0B from disk, written 32.0K to disk.
Dec 11 09:07:50 compute-1 systemd[1]: Starting Network Manager...
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.7413] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:32e43615-624a-4aad-9e4a-02351ae2816f)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.7414] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.7472] manager[0x55cc9c056000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 11 09:07:50 compute-1 systemd[1]: Starting Hostname Service...
Dec 11 09:07:50 compute-1 systemd[1]: Started Hostname Service.
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8299] hostname: hostname: using hostnamed
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8300] hostname: static hostname changed from (none) to "compute-1"
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8306] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8314] manager[0x55cc9c056000]: rfkill: Wi-Fi hardware radio set enabled
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8315] manager[0x55cc9c056000]: rfkill: WWAN hardware radio set enabled
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8338] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8346] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8346] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8347] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8347] manager: Networking is enabled by state file
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8350] settings: Loaded settings plugin: keyfile (internal)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8353] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8386] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8396] dhcp: init: Using DHCP client 'internal'
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8399] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8405] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8410] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8419] device (lo): Activation: starting connection 'lo' (2718cfd2-2948-42fc-8bc2-9c6575230f6f)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8425] device (eth0): carrier: link connected
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8430] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8436] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8436] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8442] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8449] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8456] device (eth1): carrier: link connected
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8460] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8467] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (911d1d92-3d36-5779-a68f-4138e40215d3) (indicated)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8468] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8474] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8481] device (eth1): Activation: starting connection 'ci-private-network' (911d1d92-3d36-5779-a68f-4138e40215d3)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8488] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 11 09:07:50 compute-1 systemd[1]: Started Network Manager.
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8499] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8502] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8504] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8506] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8510] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8514] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8516] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8520] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8528] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8532] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8542] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8557] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8569] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8571] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8577] device (lo): Activation: successful, device activated.
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8585] dhcp4 (eth0): state changed new lease, address=38.102.83.2
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8592] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8669] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8679] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8682] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8685] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8688] device (eth1): Activation: successful, device activated.
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8699] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8702] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8706] manager: NetworkManager state is now CONNECTED_SITE
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8708] device (eth0): Activation: successful, device activated.
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8714] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 11 09:07:50 compute-1 systemd[1]: Starting Network Manager Wait Online...
Dec 11 09:07:50 compute-1 NetworkManager[48931]: <info>  [1765444070.8719] manager: startup complete
Dec 11 09:07:50 compute-1 sudo[48914]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:50 compute-1 systemd[1]: Finished Network Manager Wait Online.
Dec 11 09:07:51 compute-1 sudo[49141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bphuornpcckemexgiwkdnqmzsjtikygc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444071.0738106-465-257946916821940/AnsiballZ_dnf.py'
Dec 11 09:07:51 compute-1 sudo[49141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:51 compute-1 python3.9[49143]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:07:56 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 11 09:07:56 compute-1 systemd[1]: Starting man-db-cache-update.service...
Dec 11 09:07:56 compute-1 systemd[1]: Reloading.
Dec 11 09:07:57 compute-1 systemd-rc-local-generator[49200]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:07:57 compute-1 systemd-sysv-generator[49203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:07:57 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 11 09:07:58 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 11 09:07:58 compute-1 systemd[1]: Finished man-db-cache-update.service.
Dec 11 09:07:58 compute-1 systemd[1]: run-re05f66e8d65b4a78b2338f7e715eb2c3.service: Deactivated successfully.
Dec 11 09:07:58 compute-1 sudo[49141]: pam_unix(sudo:session): session closed for user root
Dec 11 09:07:59 compute-1 sudo[49601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccfiwgxfpcyyyvptzxaxrqnesbxpvkgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444079.3224618-501-43896292531081/AnsiballZ_stat.py'
Dec 11 09:07:59 compute-1 sudo[49601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:07:59 compute-1 python3.9[49603]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:07:59 compute-1 sudo[49601]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:00 compute-1 sudo[49753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkggdfpkxjcrqsdexdsmdidmfalhsyav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444080.033766-528-61942591932180/AnsiballZ_ini_file.py'
Dec 11 09:08:00 compute-1 sudo[49753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:00 compute-1 python3.9[49755]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:00 compute-1 sudo[49753]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:01 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 09:08:01 compute-1 sudo[49907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgipssvymopusqrheovqpndxtimmnbty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444081.0054703-558-114509643290161/AnsiballZ_ini_file.py'
Dec 11 09:08:01 compute-1 sudo[49907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:01 compute-1 python3.9[49909]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:01 compute-1 sudo[49907]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:01 compute-1 sudo[50059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeegtbaqairhtijlesomnujpzxiiifpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444081.7081115-558-44979837837583/AnsiballZ_ini_file.py'
Dec 11 09:08:01 compute-1 sudo[50059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:02 compute-1 python3.9[50061]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:02 compute-1 sudo[50059]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:02 compute-1 sudo[50211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taiycodavkhfwcuqlfnzfkfwpgkeufns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444082.3387456-603-34426037099970/AnsiballZ_ini_file.py'
Dec 11 09:08:02 compute-1 sudo[50211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:02 compute-1 python3.9[50213]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:02 compute-1 sudo[50211]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:03 compute-1 sudo[50363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkwogxersgemonfvjodrrssmwnfevpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444082.9476674-603-23936376336666/AnsiballZ_ini_file.py'
Dec 11 09:08:03 compute-1 sudo[50363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:03 compute-1 python3.9[50365]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:03 compute-1 sudo[50363]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:04 compute-1 sudo[50515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxnamdqrexbefwtoqnnhfxbuaqusxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444084.0310605-648-248616349677863/AnsiballZ_stat.py'
Dec 11 09:08:04 compute-1 sudo[50515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:04 compute-1 python3.9[50517]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:04 compute-1 sudo[50515]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:04 compute-1 sudo[50638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apxtxnxapcddxpkzkxdxobewsmzntyis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444084.0310605-648-248616349677863/AnsiballZ_copy.py'
Dec 11 09:08:04 compute-1 sudo[50638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:05 compute-1 python3.9[50640]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444084.0310605-648-248616349677863/.source _original_basename=.9ljahvg7 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:05 compute-1 sudo[50638]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:05 compute-1 sudo[50790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaeppgegaynnmwefapbvsjjqpbzlkpss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444085.3953812-693-55833366591011/AnsiballZ_file.py'
Dec 11 09:08:05 compute-1 sudo[50790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:05 compute-1 python3.9[50792]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:05 compute-1 sudo[50790]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:06 compute-1 sudo[50942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuazvtzmzexexxbocadgodmhmmcmhfgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444086.0663543-717-72693253043997/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 11 09:08:06 compute-1 sudo[50942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:06 compute-1 python3.9[50944]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 11 09:08:06 compute-1 sudo[50942]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:07 compute-1 sudo[51094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dszcpzbqmaubwhkqoxdxdpuaolptpbxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444086.9139109-744-126563297061553/AnsiballZ_file.py'
Dec 11 09:08:07 compute-1 sudo[51094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:07 compute-1 python3.9[51096]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:07 compute-1 sudo[51094]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:08 compute-1 sudo[51246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkgulbroybttfirhezbacqfquvbxatpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444087.7987149-774-22337544257525/AnsiballZ_stat.py'
Dec 11 09:08:08 compute-1 sudo[51246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:08 compute-1 sudo[51246]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:08 compute-1 sudo[51369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izazfkpefknvlijoarusmelytzctshms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444087.7987149-774-22337544257525/AnsiballZ_copy.py'
Dec 11 09:08:08 compute-1 sudo[51369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:08 compute-1 sudo[51369]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:09 compute-1 sudo[51521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmaxkjnwhjwgbmgektkvpkrsxjlajlpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444089.1736646-819-208897501457744/AnsiballZ_slurp.py'
Dec 11 09:08:09 compute-1 sudo[51521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:09 compute-1 python3.9[51523]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 11 09:08:09 compute-1 sudo[51521]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:10 compute-1 sudo[51696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgfnhefdvvcpfbjqietjcsunajmrerq ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444090.0871806-846-58564772487607/async_wrapper.py j820014434979 300 /home/zuul/.ansible/tmp/ansible-tmp-1765444090.0871806-846-58564772487607/AnsiballZ_edpm_os_net_config.py _'
Dec 11 09:08:10 compute-1 sudo[51696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:10 compute-1 ansible-async_wrapper.py[51698]: Invoked with j820014434979 300 /home/zuul/.ansible/tmp/ansible-tmp-1765444090.0871806-846-58564772487607/AnsiballZ_edpm_os_net_config.py _
Dec 11 09:08:10 compute-1 ansible-async_wrapper.py[51701]: Starting module and watcher
Dec 11 09:08:10 compute-1 ansible-async_wrapper.py[51701]: Start watching 51702 (300)
Dec 11 09:08:10 compute-1 ansible-async_wrapper.py[51702]: Start module (51702)
Dec 11 09:08:10 compute-1 ansible-async_wrapper.py[51698]: Return async_wrapper task started.
Dec 11 09:08:10 compute-1 sudo[51696]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:11 compute-1 python3.9[51703]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 11 09:08:11 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 11 09:08:11 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 11 09:08:11 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 11 09:08:11 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 11 09:08:11 compute-1 kernel: cfg80211: failed to load regulatory.db
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1158] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1186] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1803] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1808] audit: op="connection-add" uuid="72bbe578-880c-4227-bfa9-bc4ce7047db7" name="br-ex-br" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1822] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1824] audit: op="connection-add" uuid="bfb06465-2fc8-45ae-ab74-2ed02101d6bf" name="br-ex-port" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1838] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1840] audit: op="connection-add" uuid="a9ed5072-60e1-460b-9ebf-fbeada6bcdb5" name="eth1-port" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1854] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1856] audit: op="connection-add" uuid="0dc58be8-89a0-483c-a88a-4c58bf934df3" name="vlan20-port" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1870] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1872] audit: op="connection-add" uuid="d12e538d-e36f-473d-a8ea-b42fef90777b" name="vlan21-port" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1889] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1891] audit: op="connection-add" uuid="8505c977-7bae-44da-b895-f6117a3c1d4a" name="vlan22-port" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1905] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1907] audit: op="connection-add" uuid="67cbe97c-9ea0-4842-ab25-1f0de98472ab" name="vlan23-port" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1928] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1949] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.1952] audit: op="connection-add" uuid="113e1dee-0073-4ae3-b05c-eebcd7ed797c" name="br-ex-if" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2014] audit: op="connection-update" uuid="911d1d92-3d36-5779-a68f-4138e40215d3" name="ci-private-network" args="ipv4.dns,ipv4.addresses,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.routing-rules,ovs-interface.type,ovs-external-ids.data,connection.controller,connection.port-type,connection.slave-type,connection.master,connection.timestamp,ipv6.routes,ipv6.dns,ipv6.addresses,ipv6.routing-rules,ipv6.method,ipv6.addr-gen-mode" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2034] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2036] audit: op="connection-add" uuid="c287ce8b-42e4-4b17-b3f6-288c949d5100" name="vlan20-if" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2056] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2059] audit: op="connection-add" uuid="de5783e3-a6af-4c20-974c-a7adf267e0ba" name="vlan21-if" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2077] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2080] audit: op="connection-add" uuid="40c3cc10-2894-4aa2-b4c6-894c02917a86" name="vlan22-if" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2097] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2101] audit: op="connection-add" uuid="6d8b6014-0909-43f3-a958-c04c91c73369" name="vlan23-if" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2115] audit: op="connection-delete" uuid="c8dc28c4-ba83-33d4-a5e8-63663506fb05" name="Wired connection 1" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2128] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2132] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2142] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2145] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (72bbe578-880c-4227-bfa9-bc4ce7047db7)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2146] audit: op="connection-activate" uuid="72bbe578-880c-4227-bfa9-bc4ce7047db7" name="br-ex-br" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2149] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2151] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2159] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2166] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (bfb06465-2fc8-45ae-ab74-2ed02101d6bf)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2169] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2172] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2178] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2184] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a9ed5072-60e1-460b-9ebf-fbeada6bcdb5)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2186] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2187] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2193] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2197] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (0dc58be8-89a0-483c-a88a-4c58bf934df3)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2199] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2200] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2204] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2208] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (d12e538d-e36f-473d-a8ea-b42fef90777b)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2210] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2210] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2215] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2218] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (8505c977-7bae-44da-b895-f6117a3c1d4a)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2220] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2221] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2225] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2229] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (67cbe97c-9ea0-4842-ab25-1f0de98472ab)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2229] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2232] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2234] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2239] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2240] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2243] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2246] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (113e1dee-0073-4ae3-b05c-eebcd7ed797c)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2246] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2249] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2250] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2251] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2252] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2262] device (eth1): disconnecting for new activation request.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2263] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2266] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2267] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2268] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2271] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2272] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2275] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2279] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (c287ce8b-42e4-4b17-b3f6-288c949d5100)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2280] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2283] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2285] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2286] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2289] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2290] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2293] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2297] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (de5783e3-a6af-4c20-974c-a7adf267e0ba)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2298] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2301] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2303] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2304] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2307] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2308] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2312] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2316] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (40c3cc10-2894-4aa2-b4c6-894c02917a86)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2317] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2319] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2321] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2323] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2326] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <warn>  [1765444093.2327] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2330] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2335] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (6d8b6014-0909-43f3-a958-c04c91c73369)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2336] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2339] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2341] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2343] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2345] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2356] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2358] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2361] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2363] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2368] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2372] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2375] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2379] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2380] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2385] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2388] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2391] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2393] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2397] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2401] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2404] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2406] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2410] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2414] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2417] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2418] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2423] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2426] dhcp4 (eth0): canceled DHCP transaction
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2427] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2427] dhcp4 (eth0): state changed no lease
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2428] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2437] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51704 uid=0 result="fail" reason="Device is not activated"
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2485] dhcp4 (eth0): state changed new lease, address=38.102.83.2
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2712] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 11 09:08:13 compute-1 kernel: ovs-system: entered promiscuous mode
Dec 11 09:08:13 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2724] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2730] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2738] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2746] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2750] device (eth1): disconnecting for new activation request.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2751] audit: op="connection-activate" uuid="911d1d92-3d36-5779-a68f-4138e40215d3" name="ci-private-network" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 kernel: Timeout policy base is empty
Dec 11 09:08:13 compute-1 systemd-udevd[51708]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2778] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51704 uid=0 result="success"
Dec 11 09:08:13 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2844] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2867] device (eth1): Activation: starting connection 'ci-private-network' (911d1d92-3d36-5779-a68f-4138e40215d3)
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2874] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2876] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2880] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2882] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2893] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2896] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2904] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2909] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2922] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2924] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2928] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2932] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2938] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2941] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2944] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2946] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2947] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2948] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2950] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2955] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2958] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2965] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2969] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2972] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2975] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2978] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2984] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2985] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.2989] device (eth1): Activation: successful, device activated.
Dec 11 09:08:13 compute-1 kernel: br-ex: entered promiscuous mode
Dec 11 09:08:13 compute-1 kernel: vlan22: entered promiscuous mode
Dec 11 09:08:13 compute-1 systemd-udevd[51709]: Network interface NamePolicy= disabled on kernel command line.
Dec 11 09:08:13 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3285] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3304] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3329] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3332] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3340] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3357] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3371] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 kernel: vlan23: entered promiscuous mode
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3417] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3418] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3423] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 kernel: vlan21: entered promiscuous mode
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3499] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3520] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 kernel: vlan20: entered promiscuous mode
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3546] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3556] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3560] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3568] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3581] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3623] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3628] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3634] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3645] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3664] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3702] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3705] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 11 09:08:13 compute-1 NetworkManager[48931]: <info>  [1765444093.3710] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 11 09:08:14 compute-1 NetworkManager[48931]: <info>  [1765444094.4725] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51704 uid=0 result="success"
Dec 11 09:08:14 compute-1 sudo[52059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbpkfeeajroaddsqcbolakkpedptvkrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444094.0397642-846-221768706124262/AnsiballZ_async_status.py'
Dec 11 09:08:14 compute-1 sudo[52059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:14 compute-1 NetworkManager[48931]: <info>  [1765444094.6351] checkpoint[0x55cc9c02b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 11 09:08:14 compute-1 NetworkManager[48931]: <info>  [1765444094.6354] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51704 uid=0 result="success"
Dec 11 09:08:14 compute-1 python3.9[52061]: ansible-ansible.legacy.async_status Invoked with jid=j820014434979.51698 mode=status _async_dir=/root/.ansible_async
Dec 11 09:08:14 compute-1 sudo[52059]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:14 compute-1 NetworkManager[48931]: <info>  [1765444094.9277] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51704 uid=0 result="success"
Dec 11 09:08:14 compute-1 NetworkManager[48931]: <info>  [1765444094.9287] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51704 uid=0 result="success"
Dec 11 09:08:15 compute-1 NetworkManager[48931]: <info>  [1765444095.3118] audit: op="networking-control" arg="global-dns-configuration" pid=51704 uid=0 result="success"
Dec 11 09:08:15 compute-1 NetworkManager[48931]: <info>  [1765444095.3156] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 11 09:08:15 compute-1 NetworkManager[48931]: <info>  [1765444095.3196] audit: op="networking-control" arg="global-dns-configuration" pid=51704 uid=0 result="success"
Dec 11 09:08:15 compute-1 NetworkManager[48931]: <info>  [1765444095.3234] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51704 uid=0 result="success"
Dec 11 09:08:15 compute-1 NetworkManager[48931]: <info>  [1765444095.5021] checkpoint[0x55cc9c02ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 11 09:08:15 compute-1 NetworkManager[48931]: <info>  [1765444095.5027] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51704 uid=0 result="success"
Dec 11 09:08:15 compute-1 ansible-async_wrapper.py[51702]: Module complete (51702)
Dec 11 09:08:15 compute-1 ansible-async_wrapper.py[51701]: Done in kid B.
Dec 11 09:08:17 compute-1 sudo[52165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfkmazdrqkcfszhrvffaurrtgkqqqobd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444094.0397642-846-221768706124262/AnsiballZ_async_status.py'
Dec 11 09:08:17 compute-1 sudo[52165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:18 compute-1 python3.9[52167]: ansible-ansible.legacy.async_status Invoked with jid=j820014434979.51698 mode=status _async_dir=/root/.ansible_async
Dec 11 09:08:18 compute-1 sudo[52165]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:18 compute-1 sudo[52265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bztbuuijuzamxsmusvenkorlchnepynj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444094.0397642-846-221768706124262/AnsiballZ_async_status.py'
Dec 11 09:08:18 compute-1 sudo[52265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:18 compute-1 python3.9[52267]: ansible-ansible.legacy.async_status Invoked with jid=j820014434979.51698 mode=cleanup _async_dir=/root/.ansible_async
Dec 11 09:08:18 compute-1 sudo[52265]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:19 compute-1 sudo[52417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raijvahfdutoaoggtfsfyxtfgnqjqqjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444098.889499-927-131080322379814/AnsiballZ_stat.py'
Dec 11 09:08:19 compute-1 sudo[52417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:19 compute-1 python3.9[52419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:19 compute-1 sudo[52417]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:19 compute-1 sudo[52540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdslxuzplheewzqmhofaboedrrmlqgwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444098.889499-927-131080322379814/AnsiballZ_copy.py'
Dec 11 09:08:19 compute-1 sudo[52540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:19 compute-1 python3.9[52542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444098.889499-927-131080322379814/.source.returncode _original_basename=.4s2ymhp4 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:19 compute-1 sudo[52540]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:20 compute-1 sudo[52692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpgqgocjssfhxhmbwmqruhuyvptvwobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444100.2941241-975-99840228728353/AnsiballZ_stat.py'
Dec 11 09:08:20 compute-1 sudo[52692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:20 compute-1 python3.9[52694]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:20 compute-1 sudo[52692]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:20 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 11 09:08:21 compute-1 sudo[52817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckbnzveayeueftfgpcipxbcetbqxjglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444100.2941241-975-99840228728353/AnsiballZ_copy.py'
Dec 11 09:08:21 compute-1 sudo[52817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:21 compute-1 python3.9[52819]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444100.2941241-975-99840228728353/.source.cfg _original_basename=.0_tgp87t follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:21 compute-1 sudo[52817]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:21 compute-1 sudo[52970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mryzavyphuklmerygtcuqisurobbdkjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444101.533929-1020-262953641314305/AnsiballZ_systemd.py'
Dec 11 09:08:21 compute-1 sudo[52970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:22 compute-1 python3.9[52972]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:08:22 compute-1 systemd[1]: Reloading Network Manager...
Dec 11 09:08:22 compute-1 NetworkManager[48931]: <info>  [1765444102.2237] audit: op="reload" arg="0" pid=52976 uid=0 result="success"
Dec 11 09:08:22 compute-1 NetworkManager[48931]: <info>  [1765444102.2244] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 11 09:08:22 compute-1 systemd[1]: Reloaded Network Manager.
Dec 11 09:08:22 compute-1 sudo[52970]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:22 compute-1 sshd-session[44930]: Connection closed by 192.168.122.30 port 55606
Dec 11 09:08:22 compute-1 sshd-session[44927]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:08:22 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Dec 11 09:08:22 compute-1 systemd[1]: session-11.scope: Consumed 50.029s CPU time.
Dec 11 09:08:22 compute-1 systemd-logind[791]: Session 11 logged out. Waiting for processes to exit.
Dec 11 09:08:22 compute-1 systemd-logind[791]: Removed session 11.
Dec 11 09:08:28 compute-1 sshd-session[53007]: Accepted publickey for zuul from 192.168.122.30 port 55354 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:08:28 compute-1 systemd-logind[791]: New session 12 of user zuul.
Dec 11 09:08:28 compute-1 systemd[1]: Started Session 12 of User zuul.
Dec 11 09:08:28 compute-1 sshd-session[53007]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:08:29 compute-1 python3.9[53160]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:30 compute-1 python3.9[53314]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:08:31 compute-1 python3.9[53508]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:08:32 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 11 09:08:32 compute-1 sshd-session[53010]: Connection closed by 192.168.122.30 port 55354
Dec 11 09:08:32 compute-1 sshd-session[53007]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:08:32 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Dec 11 09:08:32 compute-1 systemd[1]: session-12.scope: Consumed 2.256s CPU time.
Dec 11 09:08:32 compute-1 systemd-logind[791]: Session 12 logged out. Waiting for processes to exit.
Dec 11 09:08:32 compute-1 systemd-logind[791]: Removed session 12.
Dec 11 09:08:38 compute-1 sshd-session[53537]: Accepted publickey for zuul from 192.168.122.30 port 34726 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:08:38 compute-1 systemd-logind[791]: New session 13 of user zuul.
Dec 11 09:08:38 compute-1 systemd[1]: Started Session 13 of User zuul.
Dec 11 09:08:38 compute-1 sshd-session[53537]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:08:39 compute-1 python3.9[53690]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:40 compute-1 python3.9[53844]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:41 compute-1 sudo[53999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvdjiwloeqewkldhcbgjtoczcrmsjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444120.9090428-81-69036125744305/AnsiballZ_setup.py'
Dec 11 09:08:41 compute-1 sudo[53999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:41 compute-1 python3.9[54001]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:08:41 compute-1 sudo[53999]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:42 compute-1 sudo[54083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysmnfkozbykmrprculuvitwfpiixygrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444120.9090428-81-69036125744305/AnsiballZ_dnf.py'
Dec 11 09:08:42 compute-1 sudo[54083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:42 compute-1 python3.9[54085]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:08:43 compute-1 sudo[54083]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:44 compute-1 sudo[54237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvptpidavdnvcxrkxlzpjcbtpyzldqnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444124.282607-117-224839340413311/AnsiballZ_setup.py'
Dec 11 09:08:44 compute-1 sudo[54237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:44 compute-1 python3.9[54239]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:08:45 compute-1 sudo[54237]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:45 compute-1 sudo[54432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiozeunfztwqifrefjawyidrspecqktz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444125.5255861-150-191425623276757/AnsiballZ_file.py'
Dec 11 09:08:45 compute-1 sudo[54432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:46 compute-1 python3.9[54434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:46 compute-1 sudo[54432]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:46 compute-1 sudo[54584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esxislhtscqimodtqcayvoisijqbsrpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444126.3262794-174-251737229457812/AnsiballZ_command.py'
Dec 11 09:08:46 compute-1 sudo[54584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:46 compute-1 python3.9[54586]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:08:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat944129523-merged.mount: Deactivated successfully.
Dec 11 09:08:47 compute-1 podman[54587]: 2025-12-11 09:08:47.090265019 +0000 UTC m=+0.103980470 system refresh
Dec 11 09:08:47 compute-1 sudo[54584]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:47 compute-1 sudo[54748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfgozpszodsicyizfluzsdeozmmdypuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444127.42757-198-145248214146809/AnsiballZ_stat.py'
Dec 11 09:08:47 compute-1 sudo[54748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:48 compute-1 python3.9[54750]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:48 compute-1 sudo[54748]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:48 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:08:48 compute-1 sudo[54871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpzzrybguwxhwyczvysibrhofcfbnpun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444127.42757-198-145248214146809/AnsiballZ_copy.py'
Dec 11 09:08:48 compute-1 sudo[54871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:48 compute-1 python3.9[54873]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444127.42757-198-145248214146809/.source.json follow=False _original_basename=podman_network_config.j2 checksum=e961937973dd22e036d1720fb08d6e3ad33112f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:08:48 compute-1 sudo[54871]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:49 compute-1 sudo[55023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trejublzevhgnnyfklgznvmyuuaxowmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444128.9302716-243-75396109426201/AnsiballZ_stat.py'
Dec 11 09:08:49 compute-1 sudo[55023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:49 compute-1 python3.9[55025]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:08:49 compute-1 sudo[55023]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:49 compute-1 sudo[55146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbikewpicihpqdzidbqgqgysvrftqdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444128.9302716-243-75396109426201/AnsiballZ_copy.py'
Dec 11 09:08:49 compute-1 sudo[55146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:50 compute-1 python3.9[55148]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765444128.9302716-243-75396109426201/.source.conf follow=False _original_basename=registries.conf.j2 checksum=aa15b84f7f4c5c1e005ae51043980351730af2c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:50 compute-1 sudo[55146]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:50 compute-1 sudo[55298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeiigbejnclutmxmzisjfaocixidswqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444130.3447394-291-169294795483924/AnsiballZ_ini_file.py'
Dec 11 09:08:50 compute-1 sudo[55298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:50 compute-1 python3.9[55300]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:51 compute-1 sudo[55298]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:51 compute-1 sudo[55450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeudnblzumtrndxkztbrswoiiockktmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444131.1438107-291-144631765668115/AnsiballZ_ini_file.py'
Dec 11 09:08:51 compute-1 sudo[55450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:51 compute-1 python3.9[55452]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:51 compute-1 sudo[55450]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:52 compute-1 sudo[55602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpulxgibbgmhirnbahgnzqksehuujlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444131.8380098-291-91653175649869/AnsiballZ_ini_file.py'
Dec 11 09:08:52 compute-1 sudo[55602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:52 compute-1 python3.9[55604]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:52 compute-1 sudo[55602]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:52 compute-1 sudo[55754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-livasvvcpsbyqqxijgrybghamuaxojfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444132.463971-291-31551387089383/AnsiballZ_ini_file.py'
Dec 11 09:08:52 compute-1 sudo[55754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:52 compute-1 python3.9[55756]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:08:53 compute-1 sudo[55754]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:53 compute-1 sudo[55906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nesosgtsoamiczmrzmiakupunbrcbexz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444133.3230438-384-281194310535444/AnsiballZ_dnf.py'
Dec 11 09:08:53 compute-1 sudo[55906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:53 compute-1 python3.9[55908]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:08:55 compute-1 sudo[55906]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:56 compute-1 sudo[56059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcnltksqifjduzaltwhmrqrdmvmldcff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444135.8761668-417-202158645925375/AnsiballZ_setup.py'
Dec 11 09:08:56 compute-1 sudo[56059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:56 compute-1 python3.9[56061]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:08:56 compute-1 sudo[56059]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:56 compute-1 sudo[56213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvrmbixoifigzzaukeajkbqohbevjmsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444136.6438894-441-53774164342788/AnsiballZ_stat.py'
Dec 11 09:08:56 compute-1 sudo[56213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:57 compute-1 python3.9[56215]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:08:57 compute-1 sudo[56213]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:57 compute-1 sudo[56365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxtjiogufolhddebtihlkqemapbjagiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444137.338785-468-264448224491320/AnsiballZ_stat.py'
Dec 11 09:08:57 compute-1 sudo[56365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:57 compute-1 python3.9[56367]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:08:57 compute-1 sudo[56365]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:58 compute-1 sudo[56517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iseeqfgvpxdbkgwyaaxrhauqizfjsmfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444138.1605113-498-178717252817428/AnsiballZ_command.py'
Dec 11 09:08:58 compute-1 sudo[56517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:58 compute-1 python3.9[56519]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:08:58 compute-1 sudo[56517]: pam_unix(sudo:session): session closed for user root
Dec 11 09:08:59 compute-1 sudo[56670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkyhugfkyqktywyancovaygmklytdraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444139.0643594-528-124192179391007/AnsiballZ_service_facts.py'
Dec 11 09:08:59 compute-1 sudo[56670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:08:59 compute-1 python3.9[56672]: ansible-service_facts Invoked
Dec 11 09:08:59 compute-1 network[56689]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 09:08:59 compute-1 network[56690]: 'network-scripts' will be removed from distribution in near future.
Dec 11 09:08:59 compute-1 network[56691]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 09:09:03 compute-1 sudo[56670]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:04 compute-1 sudo[56974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsflmqzyurndneusqzvkyeajjtnsiuic ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765444144.3413525-573-265746811701499/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765444144.3413525-573-265746811701499/args'
Dec 11 09:09:04 compute-1 sudo[56974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:04 compute-1 sudo[56974]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:05 compute-1 sudo[57141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcnoymqewdqcanxalojxnavrgezgnwbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444145.1593566-606-238501640771657/AnsiballZ_dnf.py'
Dec 11 09:09:05 compute-1 sudo[57141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:05 compute-1 python3.9[57143]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 11 09:09:06 compute-1 sudo[57141]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:08 compute-1 sudo[57294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjkvnskwpcoisefvxtswbznthycmpfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444148.0352077-646-222950643692379/AnsiballZ_package_facts.py'
Dec 11 09:09:08 compute-1 sudo[57294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:08 compute-1 python3.9[57296]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 11 09:09:09 compute-1 sudo[57294]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:10 compute-1 sudo[57446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ithkzcioewzxfjwvqihdxmqcoildnoaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444149.927399-676-198225207300873/AnsiballZ_stat.py'
Dec 11 09:09:10 compute-1 sudo[57446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:10 compute-1 python3.9[57448]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:10 compute-1 sudo[57446]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:10 compute-1 sudo[57571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgoqmdafwnenkxjpkwklacuiftzmdkfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444149.927399-676-198225207300873/AnsiballZ_copy.py'
Dec 11 09:09:10 compute-1 sudo[57571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:11 compute-1 python3.9[57573]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444149.927399-676-198225207300873/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:11 compute-1 sudo[57571]: pam_unix(sudo:session): session closed for user root
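[annotation] The stat/copy pair above is Ansible's idempotent file deployment: the stat gathers the destination's SHA-1 checksum, and the copy only rewrites /etc/chrony.conf (keeping a backup, since backup=True) when the rendered template's checksum differs. A simplified sketch of that idiom, with placeholder paths and none of the module's atomic-write or permission handling:

    import hashlib, os, shutil, time

    def sha1sum(path, chunk=1 << 16):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    def copy_if_changed(src, dest, backup=True):
        if os.path.exists(dest) and sha1sum(dest) == sha1sum(src):
            return False                                  # task reports "ok"
        if backup and os.path.exists(dest):
            stamp = time.strftime("%Y%m%d%H%M%S")
            shutil.copy2(dest, f"{dest}.{stamp}.bak")     # crude backup name
        shutil.copy2(src, dest)
        return True                                       # task reports "changed"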
Dec 11 09:09:11 compute-1 sudo[57725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uudlkpacbavpadgygyvxpidrqmidiash ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444151.3317096-721-137247792924934/AnsiballZ_stat.py'
Dec 11 09:09:11 compute-1 sudo[57725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:11 compute-1 python3.9[57727]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:11 compute-1 sudo[57725]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:12 compute-1 sudo[57850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cehpoelwqvngauvjfjcygdwffkznismq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444151.3317096-721-137247792924934/AnsiballZ_copy.py'
Dec 11 09:09:12 compute-1 sudo[57850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:12 compute-1 python3.9[57852]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444151.3317096-721-137247792924934/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:12 compute-1 sudo[57850]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:13 compute-1 sudo[58004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlitbjoyksxsjnavjlplfpqvlurradnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444153.5468223-784-265063853004175/AnsiballZ_lineinfile.py'
Dec 11 09:09:13 compute-1 sudo[58004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:14 compute-1 python3.9[58006]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:14 compute-1 sudo[58004]: pam_unix(sudo:session): session closed for user root
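[annotation] PEERNTP=no in /etc/sysconfig/network is the RHEL convention that stops DHCP-supplied NTP servers from being injected alongside the chrony.conf just deployed. The lineinfile task enforces "replace the first ^PEERNTP= line, or append one"; a stripped-down sketch of those semantics (no backup or atomic write, unlike the real module):

    import re

    def line_in_file(path, regexp, line):
        pat = re.compile(regexp)
        try:
            with open(path) as f:
                lines = f.read().splitlines()
        except FileNotFoundError:          # create=True behaviour
            lines = []
        for i, existing in enumerate(lines):
            if pat.search(existing):
                if existing == line:
                    return False           # already as desired
                lines[i] = line
                break
        else:
            lines.append(line)
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")
        return True

    # line_in_file("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")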
Dec 11 09:09:15 compute-1 sudo[58158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzyxyvflzalyeojwxbmjseyphgqnfarl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444155.210962-829-89383593438407/AnsiballZ_setup.py'
Dec 11 09:09:15 compute-1 sudo[58158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:15 compute-1 python3.9[58160]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:09:16 compute-1 sudo[58158]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:16 compute-1 sudo[58242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqjoonzltevwdollkggpyymcukqqwmgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444155.210962-829-89383593438407/AnsiballZ_systemd.py'
Dec 11 09:09:16 compute-1 sudo[58242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:17 compute-1 python3.9[58244]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:17 compute-1 sudo[58242]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:18 compute-1 sudo[58396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqmekdzxlgnpuqoulgmmpcrgptbqlhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444158.03626-876-51699695642783/AnsiballZ_setup.py'
Dec 11 09:09:18 compute-1 sudo[58396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:18 compute-1 python3.9[58398]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:09:18 compute-1 sudo[58396]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:19 compute-1 sudo[58480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swwaictogohdjlriodhliudkysloiynm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444158.03626-876-51699695642783/AnsiballZ_systemd.py'
Dec 11 09:09:19 compute-1 sudo[58480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:19 compute-1 python3.9[58482]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:09:19 compute-1 systemd[1]: Stopping NTP client/server...
Dec 11 09:09:19 compute-1 chronyd[787]: chronyd exiting
Dec 11 09:09:19 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Dec 11 09:09:19 compute-1 systemd[1]: Stopped NTP client/server.
Dec 11 09:09:19 compute-1 systemd[1]: Starting NTP client/server...
Dec 11 09:09:19 compute-1 chronyd[58491]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 11 09:09:19 compute-1 chronyd[58491]: Frequency -26.955 +/- 0.473 ppm read from /var/lib/chrony/drift
Dec 11 09:09:19 compute-1 chronyd[58491]: Loaded seccomp filter (level 2)
Dec 11 09:09:19 compute-1 systemd[1]: Started NTP client/server.
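[annotation] On restart chronyd logs the frequency it read back from /var/lib/chrony/drift ("-26.955 +/- 0.473 ppm" above). The drift file is just two whitespace-separated numbers, the clock's frequency offset in ppm and the estimated error; a reader under that assumption:

    def read_drift(path="/var/lib/chrony/drift"):
        # File format: "<frequency-ppm> <error-ppm>"
        with open(path) as f:
            freq, err = map(float, f.read().split())
        return freq, err

    # read_drift() -> (-26.955, 0.473) for the restart logged above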
Dec 11 09:09:19 compute-1 sudo[58480]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:20 compute-1 sshd-session[53540]: Connection closed by 192.168.122.30 port 34726
Dec 11 09:09:20 compute-1 sshd-session[53537]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:09:20 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Dec 11 09:09:20 compute-1 systemd[1]: session-13.scope: Consumed 26.320s CPU time.
Dec 11 09:09:20 compute-1 systemd-logind[791]: Session 13 logged out. Waiting for processes to exit.
Dec 11 09:09:20 compute-1 systemd-logind[791]: Removed session 13.
Dec 11 09:09:26 compute-1 sshd-session[58517]: Accepted publickey for zuul from 192.168.122.30 port 56838 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:09:26 compute-1 systemd-logind[791]: New session 14 of user zuul.
Dec 11 09:09:26 compute-1 systemd[1]: Started Session 14 of User zuul.
Dec 11 09:09:26 compute-1 sshd-session[58517]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:09:27 compute-1 sudo[58670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lacxpmblcvkaciexieozwbcldaovqhzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444166.9094505-27-166514933985806/AnsiballZ_file.py'
Dec 11 09:09:27 compute-1 sudo[58670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:27 compute-1 python3.9[58672]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:27 compute-1 sudo[58670]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:28 compute-1 sudo[58822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwmcgefqnxkaxodlvdoczjzgquxojqsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444167.8317082-63-191839866818159/AnsiballZ_stat.py'
Dec 11 09:09:28 compute-1 sudo[58822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:28 compute-1 python3.9[58824]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:28 compute-1 sudo[58822]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:28 compute-1 sudo[58945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tujjgzcmwoofcdzooymlykmpkrggepev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444167.8317082-63-191839866818159/AnsiballZ_copy.py'
Dec 11 09:09:28 compute-1 sudo[58945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:29 compute-1 python3.9[58947]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444167.8317082-63-191839866818159/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:29 compute-1 sudo[58945]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:29 compute-1 sshd-session[58520]: Connection closed by 192.168.122.30 port 56838
Dec 11 09:09:29 compute-1 sshd-session[58517]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:09:29 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Dec 11 09:09:29 compute-1 systemd[1]: session-14.scope: Consumed 1.670s CPU time.
Dec 11 09:09:29 compute-1 systemd-logind[791]: Session 14 logged out. Waiting for processes to exit.
Dec 11 09:09:29 compute-1 systemd-logind[791]: Removed session 14.
Dec 11 09:09:35 compute-1 sshd-session[58972]: Accepted publickey for zuul from 192.168.122.30 port 52704 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:09:35 compute-1 systemd-logind[791]: New session 15 of user zuul.
Dec 11 09:09:35 compute-1 systemd[1]: Started Session 15 of User zuul.
Dec 11 09:09:35 compute-1 sshd-session[58972]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:09:37 compute-1 python3.9[59125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:09:38 compute-1 sudo[59279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmbvihgrsbruieopvmsjjrnlvlieftdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444177.4825137-60-182852331891767/AnsiballZ_file.py'
Dec 11 09:09:38 compute-1 sudo[59279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:38 compute-1 python3.9[59281]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:38 compute-1 sudo[59279]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:39 compute-1 sudo[59454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqyhkelwapvkbcvajijeydsxunumunxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444178.558295-84-193943541983094/AnsiballZ_stat.py'
Dec 11 09:09:39 compute-1 sudo[59454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:39 compute-1 python3.9[59456]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:39 compute-1 sudo[59454]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:39 compute-1 sudo[59577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdncgytkmtowptmdrevbvebwxgztxvaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444178.558295-84-193943541983094/AnsiballZ_copy.py'
Dec 11 09:09:39 compute-1 sudo[59577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:39 compute-1 python3.9[59579]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765444178.558295-84-193943541983094/.source.json _original_basename=.xhm5l34p follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:39 compute-1 sudo[59577]: pam_unix(sudo:session): session closed for user root
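[annotation] The body of auth.json is masked (content=NOT_LOGGING_PARAMETER), but the logged checksum is still informative: bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f appears to be the SHA-1 of the two-byte string "{}", i.e. an empty registry-credentials object. The same trick can decode other masked copies in this log (the empty-list checksum 97d170e1... turns up later for the nftables user rules). A quick check:

    import hashlib

    # If these match the checksums logged for the masked copies, the files
    # deployed were literally "{}" and "[]".
    print(hashlib.sha1(b"{}").hexdigest())  # expect bf21a9e8fbc5...
    print(hashlib.sha1(b"[]").hexdigest())  # expect 97d170e1550e...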
Dec 11 09:09:40 compute-1 sudo[59729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdkgxeitfdaakwekgoteyrdgcemcyvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444180.4682307-153-71008290318902/AnsiballZ_stat.py'
Dec 11 09:09:40 compute-1 sudo[59729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:40 compute-1 python3.9[59731]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:41 compute-1 sudo[59729]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:41 compute-1 sudo[59852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uccxrdtvdocsetbfsofuuskrnbzxdfjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444180.4682307-153-71008290318902/AnsiballZ_copy.py'
Dec 11 09:09:41 compute-1 sudo[59852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:41 compute-1 python3.9[59854]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444180.4682307-153-71008290318902/.source _original_basename=.17mn_jap follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:41 compute-1 sudo[59852]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:42 compute-1 sudo[60004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hripnxitrtekyxywqgammnzayredpcpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444181.8191917-201-72801540997167/AnsiballZ_file.py'
Dec 11 09:09:42 compute-1 sudo[60004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:42 compute-1 python3.9[60006]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:09:42 compute-1 sudo[60004]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:42 compute-1 sshd-session[60007]: Invalid user admin from 78.128.112.74 port 58454
Dec 11 09:09:42 compute-1 sudo[60158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryhwmwscgnkcmlndneubqotewakdsjze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444182.4550457-225-118394607028959/AnsiballZ_stat.py'
Dec 11 09:09:42 compute-1 sudo[60158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:42 compute-1 sshd-session[60007]: Connection closed by invalid user admin 78.128.112.74 port 58454 [preauth]
Dec 11 09:09:42 compute-1 python3.9[60160]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:42 compute-1 sudo[60158]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:43 compute-1 sudo[60281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuyycyvbgpnmidpswuckoebgxexatwgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444182.4550457-225-118394607028959/AnsiballZ_copy.py'
Dec 11 09:09:43 compute-1 sudo[60281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:43 compute-1 python3.9[60283]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765444182.4550457-225-118394607028959/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:09:43 compute-1 sudo[60281]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:43 compute-1 sudo[60433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxwomnpxafiyaiiyyuffmanpgjrlreht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444183.66463-225-46492568083108/AnsiballZ_stat.py'
Dec 11 09:09:43 compute-1 sudo[60433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:44 compute-1 python3.9[60435]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:44 compute-1 sudo[60433]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:44 compute-1 sudo[60556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyvfqugbkkhgpaqlvzcnhhzzrazzmydc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444183.66463-225-46492568083108/AnsiballZ_copy.py'
Dec 11 09:09:44 compute-1 sudo[60556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:44 compute-1 python3.9[60558]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765444183.66463-225-46492568083108/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 11 09:09:44 compute-1 sudo[60556]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:45 compute-1 sudo[60708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywtypvmetxsnjcmjqpujhbinqkarknu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444184.832938-312-238394569687010/AnsiballZ_file.py'
Dec 11 09:09:45 compute-1 sudo[60708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:45 compute-1 python3.9[60710]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:45 compute-1 sudo[60708]: pam_unix(sudo:session): session closed for user root
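[annotation] mode=420 in the file task above is not a malformed permission: journald renders the integer in decimal, and 420 decimal is exactly 0o644. This is the classic footprint of an unquoted mode: 0644 in YAML, which parses as an octal integer rather than the string "0644". The arithmetic:

    # 420 (decimal) and 0644 (octal) are the same permission bits.
    assert 0o644 == 420
    print(oct(420))        # -> 0o644, i.e. rw-r--r--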
Dec 11 09:09:45 compute-1 sudo[60860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcogckxcixxcfgpliltljznrukantbqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444185.5101385-336-33093975885823/AnsiballZ_stat.py'
Dec 11 09:09:45 compute-1 sudo[60860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:46 compute-1 python3.9[60862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:46 compute-1 sudo[60860]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:46 compute-1 sudo[60983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxohtclikzjakqxnfjvghagdldvslwld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444185.5101385-336-33093975885823/AnsiballZ_copy.py'
Dec 11 09:09:46 compute-1 sudo[60983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:46 compute-1 python3.9[60985]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444185.5101385-336-33093975885823/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:46 compute-1 sudo[60983]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:47 compute-1 sudo[61135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csjruayikknwgvvdipxqbkiipyegimeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444186.8313055-381-274357287091276/AnsiballZ_stat.py'
Dec 11 09:09:47 compute-1 sudo[61135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:47 compute-1 python3.9[61137]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:47 compute-1 sudo[61135]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:47 compute-1 sudo[61258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uamfizosoxiwmwvsewbrlnwqmkjvprfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444186.8313055-381-274357287091276/AnsiballZ_copy.py'
Dec 11 09:09:47 compute-1 sudo[61258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:47 compute-1 python3.9[61260]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444186.8313055-381-274357287091276/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:47 compute-1 sudo[61258]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:48 compute-1 sudo[61410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkcgupoykrazomzmfwfqqsazssjxhvyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444188.0679538-426-105020809756680/AnsiballZ_systemd.py'
Dec 11 09:09:48 compute-1 sudo[61410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:48 compute-1 python3.9[61412]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:48 compute-1 systemd[1]: Reloading.
Dec 11 09:09:49 compute-1 systemd-rc-local-generator[61440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:49 compute-1 systemd-sysv-generator[61443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:49 compute-1 systemd[1]: Reloading.
Dec 11 09:09:49 compute-1 systemd-rc-local-generator[61480]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:49 compute-1 systemd-sysv-generator[61484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:49 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Dec 11 09:09:49 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Dec 11 09:09:49 compute-1 sudo[61410]: pam_unix(sudo:session): session closed for user root
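[annotation] The sequence above installs a unit plus a 91-*.preset policy file, then calls the systemd module with daemon_reload=True, enabled=True, state=started; the two "Reloading." passes are the daemon-reload and the enablement re-reading unit files. An imperative equivalent, sketched with subprocess:

    import subprocess

    def enable_and_start(unit):
        # Mirrors ansible.builtin.systemd with daemon_reload=True,
        # enabled=True, state=started.
        for cmd in (["systemctl", "daemon-reload"],
                    ["systemctl", "enable", unit],
                    ["systemctl", "start", unit]):
            subprocess.run(cmd, check=True)

    # enable_and_start("edpm-container-shutdown.service")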
Dec 11 09:09:49 compute-1 sudo[61639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksyqsfsxfqciffeuqewhmmcserrjnrns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444189.6791887-450-113896306150931/AnsiballZ_stat.py'
Dec 11 09:09:49 compute-1 sudo[61639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:50 compute-1 python3.9[61641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:50 compute-1 sudo[61639]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:50 compute-1 sudo[61762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mscxsqksczbwjlxfqfxoumemjmzqmiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444189.6791887-450-113896306150931/AnsiballZ_copy.py'
Dec 11 09:09:50 compute-1 sudo[61762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:50 compute-1 python3.9[61764]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444189.6791887-450-113896306150931/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:50 compute-1 sudo[61762]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:51 compute-1 sudo[61914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iebbstoqbjhztirvligavropvwmwchqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444190.9212945-495-121246257420719/AnsiballZ_stat.py'
Dec 11 09:09:51 compute-1 sudo[61914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:51 compute-1 python3.9[61916]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:09:51 compute-1 sudo[61914]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:51 compute-1 sudo[62037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptpokvykyzicxoyapcylsyapstrndcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444190.9212945-495-121246257420719/AnsiballZ_copy.py'
Dec 11 09:09:51 compute-1 sudo[62037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:52 compute-1 python3.9[62039]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444190.9212945-495-121246257420719/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:09:52 compute-1 sudo[62037]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:52 compute-1 sudo[62189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwyfeknktlfsmqglqolukgyntiinpadh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444192.2570539-540-176777457732436/AnsiballZ_systemd.py'
Dec 11 09:09:52 compute-1 sudo[62189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:53 compute-1 python3.9[62191]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:53 compute-1 systemd[1]: Reloading.
Dec 11 09:09:53 compute-1 systemd-rc-local-generator[62220]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:53 compute-1 systemd-sysv-generator[62223]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:53 compute-1 systemd[1]: Reloading.
Dec 11 09:09:53 compute-1 systemd-rc-local-generator[62253]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:53 compute-1 systemd-sysv-generator[62259]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:53 compute-1 systemd[1]: Starting Create netns directory...
Dec 11 09:09:53 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 11 09:09:53 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 11 09:09:53 compute-1 systemd[1]: Finished Create netns directory.
Dec 11 09:09:53 compute-1 sudo[62189]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:54 compute-1 python3.9[62418]: ansible-ansible.builtin.service_facts Invoked
Dec 11 09:09:54 compute-1 network[62435]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 11 09:09:54 compute-1 network[62436]: 'network-scripts' will be removed from distribution in near future.
Dec 11 09:09:54 compute-1 network[62437]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 11 09:09:57 compute-1 sudo[62697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mduhysvaasnabccjbvjvfdnzxktgygpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444197.365403-588-156646825108110/AnsiballZ_systemd.py'
Dec 11 09:09:57 compute-1 sudo[62697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:57 compute-1 python3.9[62699]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:58 compute-1 systemd[1]: Reloading.
Dec 11 09:09:58 compute-1 systemd-rc-local-generator[62724]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:09:58 compute-1 systemd-sysv-generator[62729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:09:58 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 11 09:09:58 compute-1 iptables.init[62739]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 11 09:09:58 compute-1 iptables.init[62739]: iptables: Flushing firewall rules: [  OK  ]
Dec 11 09:09:58 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Dec 11 09:09:58 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 11 09:09:58 compute-1 sudo[62697]: pam_unix(sudo:session): session closed for user root
Dec 11 09:09:59 compute-1 sudo[62933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqfgzfnqbansbjnuiojgiobyqhdnsydm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444198.8050854-588-244823089900674/AnsiballZ_systemd.py'
Dec 11 09:09:59 compute-1 sudo[62933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:09:59 compute-1 python3.9[62935]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:09:59 compute-1 sudo[62933]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:00 compute-1 sudo[63087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmnjpkszyqvldnhicrvzadbwpocubgfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444199.7840285-636-65576739804849/AnsiballZ_systemd.py'
Dec 11 09:10:00 compute-1 sudo[63087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:00 compute-1 python3.9[63089]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:10:00 compute-1 systemd[1]: Reloading.
Dec 11 09:10:00 compute-1 systemd-sysv-generator[63121]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:10:00 compute-1 systemd-rc-local-generator[63118]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:10:00 compute-1 systemd[1]: Starting Netfilter Tables...
Dec 11 09:10:00 compute-1 systemd[1]: Finished Netfilter Tables.
Dec 11 09:10:00 compute-1 sudo[63087]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:01 compute-1 sudo[63278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mziyusmrcplmatnzuijecmpgnmqhsixo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444200.867459-660-74989233704835/AnsiballZ_command.py'
Dec 11 09:10:01 compute-1 sudo[63278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:01 compute-1 python3.9[63280]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:01 compute-1 sudo[63278]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:02 compute-1 sudo[63431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azgyuojkshwiiepsqvxlfdlbmkiguypw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444202.0452747-702-281051283790313/AnsiballZ_stat.py'
Dec 11 09:10:02 compute-1 sudo[63431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:02 compute-1 python3.9[63433]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:02 compute-1 sudo[63431]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:02 compute-1 sudo[63556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smjmdfjrtmdbbikziknymiegbesjstvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444202.0452747-702-281051283790313/AnsiballZ_copy.py'
Dec 11 09:10:02 compute-1 sudo[63556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:03 compute-1 python3.9[63558]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444202.0452747-702-281051283790313/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:03 compute-1 sudo[63556]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:03 compute-1 sudo[63709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydsdtsoyamxgaxosnfyhtwassnejspx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444203.3227463-747-7755759836756/AnsiballZ_systemd.py'
Dec 11 09:10:03 compute-1 sudo[63709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:03 compute-1 python3.9[63711]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:10:03 compute-1 systemd[1]: Reloading OpenSSH server daemon...
Dec 11 09:10:03 compute-1 systemd[1]: Reloaded OpenSSH server daemon.
Dec 11 09:10:03 compute-1 sshd[1004]: Received SIGHUP; restarting.
Dec 11 09:10:03 compute-1 sshd[1004]: Server listening on 0.0.0.0 port 22.
Dec 11 09:10:03 compute-1 sshd[1004]: Server listening on :: port 22.
Dec 11 09:10:03 compute-1 sudo[63709]: pam_unix(sudo:session): session closed for user root
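[annotation] Two safety nets show up in this sshd change: the copy used validate=/usr/sbin/sshd -T -f %s, so the rendered config had to parse before it could replace /etc/ssh/sshd_config, and state=reloaded delivered SIGHUP (the "Received SIGHUP; restarting." line) instead of restarting the unit, which keeps existing SSH sessions alive. The pattern, sketched:

    import shutil, subprocess

    def install_sshd_config(candidate, dest="/etc/ssh/sshd_config"):
        # "sshd -T -f <file>" parses and dumps the effective config;
        # a non-zero exit rejects the candidate before it goes live.
        subprocess.run(["/usr/sbin/sshd", "-T", "-f", candidate],
                       check=True, capture_output=True)
        shutil.copy2(candidate, dest)
        subprocess.run(["systemctl", "reload", "sshd"], check=True)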
Dec 11 09:10:04 compute-1 sudo[63865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhocrekdwpcchssukujlmrbfabtztdyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444204.2117517-771-145576818422005/AnsiballZ_file.py'
Dec 11 09:10:04 compute-1 sudo[63865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:04 compute-1 python3.9[63867]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:04 compute-1 sudo[63865]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:05 compute-1 sudo[64017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpqcslfduripanjwqbbfvgrvbqosbpfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444204.846529-795-57411211339649/AnsiballZ_stat.py'
Dec 11 09:10:05 compute-1 sudo[64017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:05 compute-1 python3.9[64019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:05 compute-1 sudo[64017]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:05 compute-1 sudo[64140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrbjeuedrynitsfxvalddusorcshclub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444204.846529-795-57411211339649/AnsiballZ_copy.py'
Dec 11 09:10:05 compute-1 sudo[64140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:05 compute-1 python3.9[64142]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444204.846529-795-57411211339649/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:05 compute-1 sudo[64140]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:06 compute-1 sudo[64292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guflihzpocelurewslecgjigotkixoxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444206.329559-849-29550795375871/AnsiballZ_timezone.py'
Dec 11 09:10:06 compute-1 sudo[64292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:06 compute-1 python3.9[64294]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 11 09:10:06 compute-1 systemd[1]: Starting Time & Date Service...
Dec 11 09:10:06 compute-1 systemd[1]: Started Time & Date Service.
Dec 11 09:10:07 compute-1 sudo[64292]: pam_unix(sudo:session): session closed for user root
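[annotation] community.general.timezone delegates to systemd's timedated service on hosts like this one, which is why "Starting Time & Date Service..." appears on demand (the service is bus-activated and idles out afterwards). The one-line imperative equivalent:

    import subprocess

    # What the timezone task effectively does on a systemd host.
    subprocess.run(["timedatectl", "set-timezone", "UTC"], check=True)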
Dec 11 09:10:07 compute-1 sudo[64448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmkwgzzvaylmnyrziqgkwubuvkaykfij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444207.3593223-876-14095979814363/AnsiballZ_file.py'
Dec 11 09:10:07 compute-1 sudo[64448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:07 compute-1 python3.9[64450]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:07 compute-1 sudo[64448]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:08 compute-1 sudo[64600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgyxwcoshpeyqehixuovjunidvlzxtut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444208.0991743-900-45323410774262/AnsiballZ_stat.py'
Dec 11 09:10:08 compute-1 sudo[64600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:08 compute-1 python3.9[64602]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:08 compute-1 sudo[64600]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:08 compute-1 sudo[64723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cttcjgdivmeuzxrdisxfxfgnulrcmdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444208.0991743-900-45323410774262/AnsiballZ_copy.py'
Dec 11 09:10:08 compute-1 sudo[64723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:09 compute-1 python3.9[64725]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444208.0991743-900-45323410774262/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:09 compute-1 sudo[64723]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:09 compute-1 sudo[64875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchpmqhvabplddeojsdmnxypkonlygkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444209.3954735-945-235730533613379/AnsiballZ_stat.py'
Dec 11 09:10:09 compute-1 sudo[64875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:09 compute-1 python3.9[64877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:09 compute-1 sudo[64875]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:10 compute-1 sudo[64998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmkolzulwunlcqvrpkhuidnimhjhfkkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444209.3954735-945-235730533613379/AnsiballZ_copy.py'
Dec 11 09:10:10 compute-1 sudo[64998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:10 compute-1 python3.9[65000]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765444209.3954735-945-235730533613379/.source.yaml _original_basename=.1js01f1w follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:10 compute-1 sudo[64998]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:10 compute-1 sudo[65150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cezmofhywynjnxufwhvchvstnrnzzrtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444210.6384776-990-192651185826624/AnsiballZ_stat.py'
Dec 11 09:10:10 compute-1 sudo[65150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:11 compute-1 python3.9[65152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:11 compute-1 sudo[65150]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:11 compute-1 sudo[65273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiftlozsuiysgvrylecqgmuobocnopkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444210.6384776-990-192651185826624/AnsiballZ_copy.py'
Dec 11 09:10:11 compute-1 sudo[65273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:11 compute-1 python3.9[65275]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444210.6384776-990-192651185826624/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:11 compute-1 sudo[65273]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:12 compute-1 sudo[65425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvcqnjttjudcirijyxoguivukndjcrov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444212.4799259-1035-2487076720334/AnsiballZ_command.py'
Dec 11 09:10:12 compute-1 sudo[65425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:12 compute-1 python3.9[65427]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:12 compute-1 sudo[65425]: pam_unix(sudo:session): session closed for user root
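[annotation] This is the actual firewall cutover: iptables.service was stopped and its rules flushed earlier, the separate `nft flush ruleset` cleared anything left, and `nft -f /etc/nftables/iptables.nft` loads the new base ruleset. nft applies a file as a single transaction, so a common refinement is to put `flush ruleset` at the top of the file itself, leaving no window with an empty ruleset. A sketch with an assumed, minimal rule set (the real file's contents are not in the log):

    import subprocess, tempfile, textwrap

    RULES = textwrap.dedent("""\
        flush ruleset
        table inet filter {
            chain input {
                type filter hook input priority 0; policy drop;
                ct state established,related accept
                tcp dport 22 accept
            }
        }
    """)

    def apply_ruleset(rules=RULES):
        with tempfile.NamedTemporaryFile("w", suffix=".nft",
                                         delete=False) as f:
            f.write(rules)
            path = f.name
        subprocess.run(["nft", "-c", "-f", path], check=True)  # dry-run check
        subprocess.run(["nft", "-f", path], check=True)        # atomic apply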
Dec 11 09:10:13 compute-1 sudo[65578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjeeowbuirshrjcchizthvprncgmrbwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444213.14643-1059-194612741179462/AnsiballZ_command.py'
Dec 11 09:10:13 compute-1 sudo[65578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:13 compute-1 python3.9[65580]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:13 compute-1 sudo[65578]: pam_unix(sudo:session): session closed for user root
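[annotation] `nft -j list ruleset` dumps the live ruleset in libnftables JSON: a top-level {"nftables": [...]} array whose entries are one-key objects ({"table": ...}, {"chain": ...}, {"rule": ...}). The play captures it right after loading the base rules, presumably to compare against the YAML rule files under /var/lib/edpm-config/firewall that the next task reads; a minimal consumer that just tallies object kinds:

    import json, subprocess

    def ruleset_summary():
        out = subprocess.run(["nft", "-j", "list", "ruleset"],
                             capture_output=True, text=True, check=True).stdout
        kinds = {}
        for obj in json.loads(out)["nftables"]:
            for kind in obj:               # each entry is {"table": {...}} etc.
                kinds[kind] = kinds.get(kind, 0) + 1
        return kinds

    # ruleset_summary() -> e.g. {"metainfo": 1, "table": 1, "chain": 4, ...}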
Dec 11 09:10:14 compute-1 sudo[65731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shhskrrstadounjizlqnnxggtaoxdwlm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765444214.0173817-1083-237402823244048/AnsiballZ_edpm_nftables_from_files.py'
Dec 11 09:10:14 compute-1 sudo[65731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:14 compute-1 python3[65733]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 11 09:10:14 compute-1 sudo[65731]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:15 compute-1 sudo[65883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwvidiiikisoxlrcstydzwiaijzdkzxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444214.8329563-1107-44230150235343/AnsiballZ_stat.py'
Dec 11 09:10:15 compute-1 sudo[65883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:15 compute-1 python3.9[65885]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:15 compute-1 sudo[65883]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:15 compute-1 sudo[66006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkbfrgxzueponohiemppzlebdnxaetsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444214.8329563-1107-44230150235343/AnsiballZ_copy.py'
Dec 11 09:10:15 compute-1 sudo[66006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:15 compute-1 python3.9[66008]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444214.8329563-1107-44230150235343/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:15 compute-1 sudo[66006]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:16 compute-1 sudo[66158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhyolldylwesyedddvvdzbsexauzmttv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444216.1939118-1152-16167173255680/AnsiballZ_stat.py'
Dec 11 09:10:16 compute-1 sudo[66158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:16 compute-1 python3.9[66160]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:16 compute-1 sudo[66158]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:16 compute-1 sudo[66281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xukmzhsxotxgsasnrppflwjaztbofecx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444216.1939118-1152-16167173255680/AnsiballZ_copy.py'
Dec 11 09:10:16 compute-1 sudo[66281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:17 compute-1 python3.9[66283]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444216.1939118-1152-16167173255680/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:17 compute-1 sudo[66281]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:17 compute-1 sudo[66433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqpjyprejbrpratjvekpbcnrtjnmwus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444217.5413635-1197-146390895714264/AnsiballZ_stat.py'
Dec 11 09:10:17 compute-1 sudo[66433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:17 compute-1 python3.9[66435]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:18 compute-1 sudo[66433]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:18 compute-1 sudo[66556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvriaiiapmvtxnxoafiqigjcjzxnyuog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444217.5413635-1197-146390895714264/AnsiballZ_copy.py'
Dec 11 09:10:18 compute-1 sudo[66556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:18 compute-1 python3.9[66558]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444217.5413635-1197-146390895714264/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:18 compute-1 sudo[66556]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:19 compute-1 sudo[66708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpfcgwycszrxqguyclvgbiyaakuwccwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444218.845631-1242-265396694350760/AnsiballZ_stat.py'
Dec 11 09:10:19 compute-1 sudo[66708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:19 compute-1 python3.9[66710]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:19 compute-1 sudo[66708]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:19 compute-1 sudo[66831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivxsvqprjrbyyxbepvhadndwcxnnovfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444218.845631-1242-265396694350760/AnsiballZ_copy.py'
Dec 11 09:10:19 compute-1 sudo[66831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:20 compute-1 python3.9[66833]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444218.845631-1242-265396694350760/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:20 compute-1 sudo[66831]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:20 compute-1 sudo[66983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxzxmraoqbuidmprziboikyhprqdnwwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444220.2624314-1287-198132141797935/AnsiballZ_stat.py'
Dec 11 09:10:20 compute-1 sudo[66983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:20 compute-1 python3.9[66985]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 11 09:10:20 compute-1 sudo[66983]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:21 compute-1 sudo[67106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lypklmcbbmaxbokgapuabslywclhiddw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444220.2624314-1287-198132141797935/AnsiballZ_copy.py'
Dec 11 09:10:21 compute-1 sudo[67106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:21 compute-1 python3.9[67108]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765444220.2624314-1287-198132141797935/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:21 compute-1 sudo[67106]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:21 compute-1 sudo[67258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqxithftaoarphdafwlegmemajqqlgjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444221.6739733-1332-146710641939078/AnsiballZ_file.py'
Dec 11 09:10:21 compute-1 sudo[67258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:22 compute-1 python3.9[67260]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:22 compute-1 sudo[67258]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:23 compute-1 sudo[67411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wclcapoiiwvqsyeidswpvvvxrtaftvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444222.93053-1356-13377382677732/AnsiballZ_command.py'
Dec 11 09:10:23 compute-1 sudo[67411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:23 compute-1 python3.9[67413]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:23 compute-1 sudo[67411]: pam_unix(sudo:session): session closed for user root
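The concatenation order matters: edpm-chains.nft must declare the tables and chains before edpm-flushes.nft flushes them and the rule/jump files reference them; `nft -c -f -` then parses the combined stream without touching the kernel. As a standalone check:

    cd /etc/nftables
    cat edpm-chains.nft edpm-flushes.nft edpm-rules.nft \
        edpm-update-jumps.nft edpm-jumps.nft | nft -c -f -   # dry-run parse of the merged ruleset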
Dec 11 09:10:24 compute-1 sudo[67570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbzhbgbsrozyseovlpttcyjkurocoln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444223.666046-1380-1092494076096/AnsiballZ_blockinfile.py'
Dec 11 09:10:24 compute-1 sudo[67570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:24 compute-1 python3.9[67572]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:24 compute-1 sudo[67570]: pam_unix(sudo:session): session closed for user root
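With marker `# {mark} ANSIBLE MANAGED BLOCK` and the block shown above, the section blockinfile maintains in /etc/sysconfig/nftables.conf (validated by `nft -c -f %s` before the swap) reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK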
Dec 11 09:10:24 compute-1 sudo[67723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqwqyalahnsthoowkxrzybccdwsncdhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444224.6296005-1407-128909894114544/AnsiballZ_file.py'
Dec 11 09:10:24 compute-1 sudo[67723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:25 compute-1 python3.9[67725]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:25 compute-1 sudo[67723]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:25 compute-1 sudo[67875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzwwqtkmtulfppijasiaavwzqbzgfmsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444225.2393863-1407-234739637685681/AnsiballZ_file.py'
Dec 11 09:10:25 compute-1 sudo[67875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:25 compute-1 python3.9[67877]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:25 compute-1 sudo[67875]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:26 compute-1 sudo[68027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaclqmtrnpessuifwtbadbukkipshmdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444226.0502658-1452-76465033098859/AnsiballZ_mount.py'
Dec 11 09:10:26 compute-1 sudo[68027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:26 compute-1 python3.9[68029]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 11 09:10:26 compute-1 sudo[68027]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:27 compute-1 sudo[68180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aehnfedaghnplehtimvsjlayhpsjnpap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444226.953877-1452-101630573949821/AnsiballZ_mount.py'
Dec 11 09:10:27 compute-1 sudo[68180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:27 compute-1 python3.9[68182]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 11 09:10:27 compute-1 sudo[68180]: pam_unix(sudo:session): session closed for user root
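The four tasks above create per-page-size mount points and mount hugetlbfs on them; `state=mounted` also persists the entries to /etc/fstab. The manual equivalent:

    mkdir -p /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # persisted roughly as: none /dev/hugepages1G hugetlbfs pagesize=1G 0 0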
Dec 11 09:10:28 compute-1 sshd-session[58975]: Connection closed by 192.168.122.30 port 52704
Dec 11 09:10:28 compute-1 sshd-session[58972]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:10:28 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Dec 11 09:10:28 compute-1 systemd[1]: session-15.scope: Consumed 35.151s CPU time.
Dec 11 09:10:28 compute-1 systemd-logind[791]: Session 15 logged out. Waiting for processes to exit.
Dec 11 09:10:28 compute-1 systemd-logind[791]: Removed session 15.
Dec 11 09:10:33 compute-1 sshd-session[68208]: Accepted publickey for zuul from 192.168.122.30 port 50578 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:10:33 compute-1 systemd-logind[791]: New session 16 of user zuul.
Dec 11 09:10:33 compute-1 systemd[1]: Started Session 16 of User zuul.
Dec 11 09:10:33 compute-1 sshd-session[68208]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:10:34 compute-1 sudo[68361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yesmtakwmhqkhwluspywuozjdkkaagqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444233.857685-19-200599737803293/AnsiballZ_tempfile.py'
Dec 11 09:10:34 compute-1 sudo[68361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:34 compute-1 python3.9[68363]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 11 09:10:34 compute-1 sudo[68361]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:35 compute-1 sudo[68513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxiadgodpjwkglszzfxdafngvyyugiof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444234.8034804-55-216089318459338/AnsiballZ_stat.py'
Dec 11 09:10:35 compute-1 sudo[68513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:35 compute-1 python3.9[68515]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:10:35 compute-1 sudo[68513]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:36 compute-1 sudo[68665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pachcukypxjmouozsgqtluhlwevbcyut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444235.8216128-85-234492179955455/AnsiballZ_setup.py'
Dec 11 09:10:36 compute-1 sudo[68665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:36 compute-1 python3.9[68667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:10:36 compute-1 sudo[68665]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:37 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 11 09:10:37 compute-1 sudo[68819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddyumvaliasspmjcweaxtldqkddwecnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444236.9876063-110-143449165226083/AnsiballZ_blockinfile.py'
Dec 11 09:10:37 compute-1 sudo[68819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:37 compute-1 python3.9[68821]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0qJprbeWE9gzziBi8iIuZ5/k4Y6VfsPefjRYND6OZTash940tra+OExym0WgX87tl0p8X5af7e5kx9ApSRGaDIhv1rHPZ/IiVWkI2kY4RRTIVBCxGHLfXtRDD8GaQmG8fFQddHbPFCjIrFu10YCvPF16Y/Mt5nOPq9lYkZmzorw5wMqD/xgf9v/jVXY4cCyAfeF3XiDAXmPuUspWmwPvdtRAUxIhnReR7164EpwboPcrrTUPSgIDy/z0IM0qJgoQ5hS0fQs87Lc2HEmz2jJE1uejxE+/SCuhA7bx2aqd0z6ijsdztI0+Ysu7VJZSTCPCK80fl+1QGZ3bcudNTf1ibWIpUBfFF5KKXvx9lt946bSY87rBt2xRrhZmMtEWNHnsJlN2tx2VUFi5u1V3mlluQv2KcMkQ+DwGqSZNPpnMabP6sjsRCedR5UNCGpkAlQnEnZjbOKbpjaFMHebuCOZnfOFv9p8gP2mks6D+rd9XoG4A1Q50GxFGJOb27xFQj+PU=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAKW1m3VuEqNuliezDXOtl5vBx95kHlieh9m8cBF4J2o
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAm4ND7jopGSwcS22rsdD1j27W4muhBhl+dlyzQbKJ8MgLCf5CxN7Ilf4gHl9+Edxlr9w64sx8AeNQLQfGb3frU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuuKiq2V+he/ThrupEg3hiw62Bbz9UDb8TfczhWXsP9EngwA1Jx/TgF8INdtvkM8aT2r29ZIj28UjCHuUomkDdsSnN+WYIg4rfwHhoSKEjqK2xAsoN4ad3ZHz+3NzrP80ZibBpikxrE5Qa8M77zrDVCBgV/IX9I7NJygz+xwc6+IqBXOrUx3SXRNm+puno1NmCR355gAqooVxq+pIUeFp0ON5zM29qtrmQz9gTnTiTsOiUp0yzsLQnImSKAlgYndcILPzXzIy1tFDPaCuZdYzkt6BOw/8+fUG4gOyU6r3utbIjMYM+fQNNM/quSFClwXT7oiIBe1C7LalTUlkeI9YRJf/10RXdv+DvLzNgWlDFqQu6kBT5EzIiVVzrX/osRYH29QFP0Gt8Js/MaqtQMycfQnBVr+L5lAIZfgXbLq2JHM0jsdAYqBxxXYlMffZlmFlNwQisoxilTLioxz0Vkbgaqn/Tooh2xUViJZHWmklWtkd1xAz1PIouEjHE6++Ayfk=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAlhtIw9BjEZYtqGZQGR+gBrkihlR53uEdNIOYiXwd9H
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEyaxw/KDB0GRXzjHyE2amKF8n2drpoIGhmK9B0c/hWAAxCsGGS8XxxI4TjpSvDMNoz37cKcT3e8SbrO95NheEU=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfGmKdzbx9cziI2UInDx8Z8fdzrd4qGuM081+Q0LHCe/Uk3h1ehAI1eUAf55tKHkk7aA7WMefWDMJXZJD8OlLu0uwUt/qwmp8vU7Sa7LCTdMAV/+urhPVNiqLPyPRwLzQ8ooHPQVSd3DDUkbDTAiKPSwO6O41joP9vi3IiLQHV1ia4HLL6Xid6QQ2PXQaEvs9MNuBFmnmLE+0TyGk8DHTsTUDMWJIBOkUdsR3XFYsT28eLJVM6jpVEok+DQtuxUXUBhExRj044h8jLEdduzFJ8bXYkarcYE6BCGWFuxu6ukIWhN6vUleOQraHHlY1T6+I3oqdV8R1aIFg88wb+2AH6sICyeeMqDKylfxNM1h3YfvBibBqUygE6fcOqd9PQ7itlcqq1fyAJCXf1pORVUCsOF0hMoq8KULzeXqK6YyY1XmhUHan5BJk7yuRW3a3opcDHyU/A8Oo/SDcTsH8KPdScZE/WMcfFH3l5hWSguT84BC9B3+EheVHGGOPoCbX+tSs=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILR/pY0/ZxqiLi0s+th9yq8tTKO1MwQXuTHHzr6rD8dL
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCCgLzqUdyzTsV/93pYNa7b//8jw8BJ7ijBVPNT1InrXl2EFJm3ZdwP+GHug2pMLz0UjwWUesGsid8zCMbx1Gto=
                                             create=True mode=0644 path=/tmp/ansible.bjc80y7z state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:37 compute-1 sudo[68819]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:38 compute-1 sudo[68971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbcnfuytcdopanmpwwigeahlfxssxbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444237.803184-134-185526449716011/AnsiballZ_command.py'
Dec 11 09:10:38 compute-1 sudo[68971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:38 compute-1 python3.9[68973]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.bjc80y7z' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:38 compute-1 sudo[68971]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:39 compute-1 sudo[69125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcktrfmbdtqwymkrwvpbstuwhttqvxya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444238.657496-158-57606178997835/AnsiballZ_file.py'
Dec 11 09:10:39 compute-1 sudo[69125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:39 compute-1 python3.9[69127]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.bjc80y7z state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:39 compute-1 sudo[69125]: pam_unix(sudo:session): session closed for user root
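Each known_hosts entry above follows the standard `host-patterns key-type base64-key` layout. The play writes them through a root-owned temp file and a shell redirect rather than editing /etc/ssh/ssh_known_hosts in place; the shape of that sequence, as a sketch:

    tmp=$(mktemp /tmp/ansible.XXXXXX)
    # ... blockinfile fills "$tmp" with one line per host key ...
    cat "$tmp" > /etc/ssh/ssh_known_hosts   # overwrite contents, keep the target inode
    rm -f "$tmp"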
Dec 11 09:10:39 compute-1 sshd-session[68211]: Connection closed by 192.168.122.30 port 50578
Dec 11 09:10:39 compute-1 sshd-session[68208]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:10:39 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Dec 11 09:10:39 compute-1 systemd[1]: session-16.scope: Consumed 3.365s CPU time.
Dec 11 09:10:39 compute-1 systemd-logind[791]: Session 16 logged out. Waiting for processes to exit.
Dec 11 09:10:39 compute-1 systemd-logind[791]: Removed session 16.
Dec 11 09:10:46 compute-1 sshd-session[69152]: Accepted publickey for zuul from 192.168.122.30 port 47804 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:10:46 compute-1 systemd-logind[791]: New session 17 of user zuul.
Dec 11 09:10:46 compute-1 systemd[1]: Started Session 17 of User zuul.
Dec 11 09:10:46 compute-1 sshd-session[69152]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:10:47 compute-1 python3.9[69305]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:10:48 compute-1 sudo[69459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bggrvhdlizoyduhxpabgchonhishuqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444247.5378778-57-166353138997154/AnsiballZ_systemd.py'
Dec 11 09:10:48 compute-1 sudo[69459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:48 compute-1 python3.9[69461]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 11 09:10:48 compute-1 sudo[69459]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:48 compute-1 sudo[69613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngghqclsyozugeebhcvyvqdbuccxjtli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444248.66095-81-197314315687526/AnsiballZ_systemd.py'
Dec 11 09:10:48 compute-1 sudo[69613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:49 compute-1 python3.9[69615]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 11 09:10:49 compute-1 sudo[69613]: pam_unix(sudo:session): session closed for user root
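Two systemd tasks, one for persistence and one for current state; the shell equivalent:

    systemctl enable sshd   # enabled=True: start at boot
    systemctl start sshd    # state=started: no-op if already running
    # or in one step: systemctl enable --now sshd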
Dec 11 09:10:49 compute-1 sudo[69766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashhhldyzenjqomignnztaqodoogfaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444249.5129068-108-95490358091934/AnsiballZ_command.py'
Dec 11 09:10:49 compute-1 sudo[69766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:50 compute-1 python3.9[69768]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:50 compute-1 sudo[69766]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:51 compute-1 sudo[69919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htmvgymybwpfibljxoqccvregmhyguqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444251.1406856-132-214121857596550/AnsiballZ_stat.py'
Dec 11 09:10:51 compute-1 sudo[69919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:51 compute-1 python3.9[69921]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:10:51 compute-1 sudo[69919]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:52 compute-1 sudo[70073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abfbfzxyejjcylfktmaqnhgryqiqkwpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444251.9871156-156-139829071409080/AnsiballZ_command.py'
Dec 11 09:10:52 compute-1 sudo[70073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:52 compute-1 python3.9[70075]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:10:52 compute-1 sudo[70073]: pam_unix(sudo:session): session closed for user root
Dec 11 09:10:53 compute-1 sudo[70228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lshkzgbgzcyzqimxsqecyaugfcuvhymu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444252.7630255-180-125265735255944/AnsiballZ_file.py'
Dec 11 09:10:53 compute-1 sudo[70228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:10:53 compute-1 python3.9[70230]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:10:53 compute-1 sudo[70228]: pam_unix(sudo:session): session closed for user root
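This is the apply pass that mirrors the earlier `nft -c` check: chains are (re)declared first, and only because the edpm-rules.nft.changed marker exists are the flush, rule, and update-jump files streamed into a live `nft -f -`, after which the marker is removed. A sketch of the control flow:

    nft -f /etc/nftables/edpm-chains.nft   # (re)declare tables and chains
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -   # flush, then reload live
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi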
Dec 11 09:10:53 compute-1 sshd-session[69155]: Connection closed by 192.168.122.30 port 47804
Dec 11 09:10:53 compute-1 sshd-session[69152]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:10:53 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Dec 11 09:10:53 compute-1 systemd[1]: session-17.scope: Consumed 4.588s CPU time.
Dec 11 09:10:53 compute-1 systemd-logind[791]: Session 17 logged out. Waiting for processes to exit.
Dec 11 09:10:53 compute-1 systemd-logind[791]: Removed session 17.
Dec 11 09:10:58 compute-1 sshd-session[70255]: Accepted publickey for zuul from 192.168.122.30 port 58576 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:10:58 compute-1 systemd-logind[791]: New session 18 of user zuul.
Dec 11 09:10:59 compute-1 systemd[1]: Started Session 18 of User zuul.
Dec 11 09:10:59 compute-1 sshd-session[70255]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:11:00 compute-1 python3.9[70409]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:11:01 compute-1 sudo[70563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlkljfpvmrylcwmnekkyulvcdrazukny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444260.813423-63-124882798350749/AnsiballZ_setup.py'
Dec 11 09:11:01 compute-1 sudo[70563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:01 compute-1 python3.9[70565]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 11 09:11:01 compute-1 sudo[70563]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:02 compute-1 sudo[70647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiuisxytwvjwnecjbfhyhokzzyckkgwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765444260.813423-63-124882798350749/AnsiballZ_dnf.py'
Dec 11 09:11:02 compute-1 sudo[70647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:02 compute-1 python3.9[70649]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 11 09:11:03 compute-1 sudo[70647]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:05 compute-1 python3.9[70800]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
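`needs-restarting -r` (from yum-utils) reports through its exit status whether the running system predates its updated core packages:

    if needs-restarting -r; then
        echo "no reboot required"   # exit 0
    else
        echo "reboot required"      # exit 1: kernel/core libraries updated since boot
    fi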
Dec 11 09:11:07 compute-1 python3.9[70951]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 11 09:11:08 compute-1 python3.9[71101]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:11:08 compute-1 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 11 09:11:09 compute-1 python3.9[71252]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 11 09:11:09 compute-1 sshd-session[70258]: Connection closed by 192.168.122.30 port 58576
Dec 11 09:11:09 compute-1 sshd-session[70255]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:11:09 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Dec 11 09:11:09 compute-1 systemd[1]: session-18.scope: Consumed 6.067s CPU time.
Dec 11 09:11:09 compute-1 systemd-logind[791]: Session 18 logged out. Waiting for processes to exit.
Dec 11 09:11:09 compute-1 systemd-logind[791]: Removed session 18.
Dec 11 09:11:18 compute-1 sshd-session[71277]: Accepted publickey for zuul from 38.102.83.179 port 57032 ssh2: RSA SHA256:Y1EkKFCM2AxcqFrasoatI/7noXQ4Hq5V3b6Fo5AKQhU
Dec 11 09:11:18 compute-1 systemd-logind[791]: New session 19 of user zuul.
Dec 11 09:11:18 compute-1 systemd[1]: Started Session 19 of User zuul.
Dec 11 09:11:18 compute-1 sshd-session[71277]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:11:18 compute-1 sudo[71353]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdszjbxxzolmisswmqttkbdftjwzmtxr ; /usr/bin/python3'
Dec 11 09:11:18 compute-1 sudo[71353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:18 compute-1 useradd[71357]: new group: name=ceph-admin, GID=42478
Dec 11 09:11:18 compute-1 useradd[71357]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 11 09:11:19 compute-1 sudo[71353]: pam_unix(sudo:session): session closed for user root
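The first become run creates the dedicated account cephadm will SSH in as; with the IDs from the log, the equivalent is:

    groupadd -g 42478 ceph-admin
    useradd -u 42477 -g ceph-admin -m -d /home/ceph-admin -s /bin/bash ceph-admin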
Dec 11 09:11:19 compute-1 sudo[71439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reckruhrjpuwxbtwblokdeqihxuxyiho ; /usr/bin/python3'
Dec 11 09:11:19 compute-1 sudo[71439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:19 compute-1 sudo[71439]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:19 compute-1 sudo[71512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipdmuinwbfvafzgnnpmehxzqzbrcgrbv ; /usr/bin/python3'
Dec 11 09:11:19 compute-1 sudo[71512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:20 compute-1 sudo[71512]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:20 compute-1 sudo[71562]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anqyfnnfbgduoeoacjgbufzsaxkhqkxh ; /usr/bin/python3'
Dec 11 09:11:20 compute-1 sudo[71562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:20 compute-1 sudo[71562]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:20 compute-1 sudo[71588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-culakdujifcddpyudsivntkpsakggyil ; /usr/bin/python3'
Dec 11 09:11:20 compute-1 sudo[71588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:20 compute-1 sudo[71588]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:21 compute-1 sudo[71614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssiybxyrjxdyuvpviswhawtczypwekwb ; /usr/bin/python3'
Dec 11 09:11:21 compute-1 sudo[71614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:21 compute-1 sudo[71614]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:21 compute-1 sudo[71640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehbzajokdzkulowctomhcjmlbieawqlh ; /usr/bin/python3'
Dec 11 09:11:21 compute-1 sudo[71640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:21 compute-1 sudo[71640]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:22 compute-1 sudo[71718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baqncrdzntyutemmjbzoaaexuwdtuudd ; /usr/bin/python3'
Dec 11 09:11:22 compute-1 sudo[71718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:22 compute-1 sudo[71718]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:22 compute-1 sudo[71791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmzerwmjwtjfviffejswrvsgkweovdhv ; /usr/bin/python3'
Dec 11 09:11:22 compute-1 sudo[71791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:22 compute-1 sudo[71791]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:23 compute-1 sudo[71893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uowuqmqipxifmoxqxbhunsauuztxvnot ; /usr/bin/python3'
Dec 11 09:11:23 compute-1 sudo[71893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:23 compute-1 sudo[71893]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:23 compute-1 sudo[71966]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnpteafaztekpynhhedwqkvljoshafjk ; /usr/bin/python3'
Dec 11 09:11:23 compute-1 sudo[71966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:23 compute-1 sudo[71966]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:24 compute-1 sudo[72016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewuljsywilwzqxahutelmldravfumchb ; /usr/bin/python3'
Dec 11 09:11:24 compute-1 sudo[72016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:24 compute-1 python3[72018]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:11:25 compute-1 sudo[72016]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:26 compute-1 sudo[72112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foskuyuhepcxlhyejfjxyfwrjmhjghnf ; /usr/bin/python3'
Dec 11 09:11:26 compute-1 sudo[72112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:26 compute-1 python3[72114]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 11 09:11:27 compute-1 sudo[72112]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:28 compute-1 sudo[72139]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emtplhdnvudtmcqrfwubxfdhhonhplnr ; /usr/bin/python3'
Dec 11 09:11:28 compute-1 sudo[72139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:28 compute-1 python3[72141]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 11 09:11:28 compute-1 sudo[72139]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:28 compute-1 sudo[72165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvpociiukjaofeulvcxpcaulmkepiawm ; /usr/bin/python3'
Dec 11 09:11:28 compute-1 sudo[72165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:28 compute-1 python3[72167]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:28 compute-1 kernel: loop: module loaded
Dec 11 09:11:28 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Dec 11 09:11:28 compute-1 sudo[72165]: pam_unix(sudo:session): session closed for user root
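The `dd ... bs=1 count=0 seek=20G` idiom writes no data; it only extends the file to 20 GiB, producing a sparse backing image (hence the kernel's capacity change to 41943040 512-byte sectors). A compact equivalent:

    truncate -s 20G /var/lib/ceph-osd-0.img   # same sparse-file effect as the dd seek trick
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk /dev/loop3                          # shows the 20G virtual disk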
Dec 11 09:11:29 compute-1 sudo[72200]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlmigthzofbtwtnioombxrddiczbomxo ; /usr/bin/python3'
Dec 11 09:11:29 compute-1 sudo[72200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:29 compute-1 python3[72202]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:11:29 compute-1 lvm[72205]: PV /dev/loop3 not used.
Dec 11 09:11:29 compute-1 lvm[72214]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:11:29 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 11 09:11:29 compute-1 lvm[72216]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 11 09:11:29 compute-1 sudo[72200]: pam_unix(sudo:session): session closed for user root
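`-l +100%FREE` hands every remaining extent of the volume group to the new LV, so ceph_lv0 spans the whole 20 GiB loop device. Annotated:

    pvcreate /dev/loop3                         # label the loop device as an LVM PV
    vgcreate ceph_vg0 /dev/loop3                # single-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0  # one LV over all free extents
    lvs ceph_vg0                                # verify: ceph_lv0 at ~20g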
Dec 11 09:11:29 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 11 09:11:29 compute-1 sudo[72292]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiakwkjmuydpddohibiqvvaupktoixxt ; /usr/bin/python3'
Dec 11 09:11:29 compute-1 sudo[72292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:30 compute-1 python3[72294]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 11 09:11:30 compute-1 chronyd[58491]: Selected source 23.159.16.194 (pool.ntp.org)
Dec 11 09:11:30 compute-1 sudo[72292]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:30 compute-1 sudo[72365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdhpcafsulwzjbbyfbqyzlfybfpsxvpz ; /usr/bin/python3'
Dec 11 09:11:30 compute-1 sudo[72365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:30 compute-1 python3[72367]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765444289.786457-36753-230923684353474/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 11 09:11:30 compute-1 sudo[72365]: pam_unix(sudo:session): session closed for user root
Dec 11 09:11:30 compute-1 sudo[72415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnryiaxxtvxzpdgsvdahpywbqqupwtir ; /usr/bin/python3'
Dec 11 09:11:30 compute-1 sudo[72415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:11:31 compute-1 python3[72417]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 11 09:11:31 compute-1 systemd[1]: Reloading.
Dec 11 09:11:31 compute-1 systemd-rc-local-generator[72444]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:11:31 compute-1 systemd-sysv-generator[72448]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:11:31 compute-1 systemd[1]: Starting Ceph OSD losetup...
Dec 11 09:11:31 compute-1 bash[72457]: /dev/loop3: [64513]:4327943 (/var/lib/ceph-osd-0.img)
Dec 11 09:11:31 compute-1 systemd[1]: Finished Ceph OSD losetup.
Dec 11 09:11:31 compute-1 lvm[72458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:11:31 compute-1 lvm[72458]: VG ceph_vg0 finished
Dec 11 09:11:31 compute-1 sudo[72415]: pam_unix(sudo:session): session closed for user root
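The unit itself is rendered from ceph-osd-losetup.service.j2 and its body never appears in the log; judging from the `bash[72457]` losetup output above, a plausible reconstruction (purely an assumption) is a oneshot that re-attaches the loop device at boot:

    # Hypothetical contents -- the real unit is templated by the job
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/bash -c '/sbin/losetup /dev/loop3 || /sbin/losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service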
Dec 11 09:11:33 compute-1 python3[72482]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 11 09:13:21 compute-1 sshd-session[72526]: Accepted publickey for ceph-admin from 192.168.122.100 port 34690 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:21 compute-1 systemd-logind[791]: New session 20 of user ceph-admin.
Dec 11 09:13:21 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 11 09:13:21 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 11 09:13:21 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 11 09:13:21 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 11 09:13:21 compute-1 systemd[72530]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:21 compute-1 sshd-session[72536]: Accepted publickey for ceph-admin from 192.168.122.100 port 34702 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:21 compute-1 systemd-logind[791]: New session 22 of user ceph-admin.
Dec 11 09:13:21 compute-1 systemd[72530]: Queued start job for default target Main User Target.
Dec 11 09:13:21 compute-1 systemd[72530]: Created slice User Application Slice.
Dec 11 09:13:21 compute-1 systemd[72530]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:13:21 compute-1 systemd[72530]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 09:13:21 compute-1 systemd[72530]: Reached target Paths.
Dec 11 09:13:21 compute-1 systemd[72530]: Reached target Timers.
Dec 11 09:13:21 compute-1 systemd[72530]: Starting D-Bus User Message Bus Socket...
Dec 11 09:13:21 compute-1 systemd[72530]: Starting Create User's Volatile Files and Directories...
Dec 11 09:13:21 compute-1 systemd[72530]: Listening on D-Bus User Message Bus Socket.
Dec 11 09:13:21 compute-1 systemd[72530]: Reached target Sockets.
Dec 11 09:13:21 compute-1 systemd[72530]: Finished Create User's Volatile Files and Directories.
Dec 11 09:13:21 compute-1 systemd[72530]: Reached target Basic System.
Dec 11 09:13:21 compute-1 systemd[72530]: Reached target Main User Target.
Dec 11 09:13:21 compute-1 systemd[72530]: Startup finished in 139ms.
Dec 11 09:13:21 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 11 09:13:21 compute-1 systemd[1]: Started Session 20 of User ceph-admin.
Dec 11 09:13:21 compute-1 systemd[1]: Started Session 22 of User ceph-admin.
Dec 11 09:13:21 compute-1 sshd-session[72526]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:21 compute-1 sshd-session[72536]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:21 compute-1 sudo[72551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:21 compute-1 sudo[72551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:21 compute-1 sudo[72551]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:22 compute-1 sshd-session[72576]: Accepted publickey for ceph-admin from 192.168.122.100 port 34710 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:22 compute-1 systemd-logind[791]: New session 23 of user ceph-admin.
Dec 11 09:13:22 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Dec 11 09:13:22 compute-1 sshd-session[72576]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:22 compute-1 sudo[72580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Dec 11 09:13:22 compute-1 sudo[72580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:22 compute-1 sudo[72580]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:22 compute-1 sshd-session[72605]: Accepted publickey for ceph-admin from 192.168.122.100 port 34724 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:22 compute-1 systemd-logind[791]: New session 24 of user ceph-admin.
Dec 11 09:13:22 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Dec 11 09:13:22 compute-1 sshd-session[72605]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:22 compute-1 sudo[72609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 11 09:13:22 compute-1 sudo[72609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:22 compute-1 sudo[72609]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:22 compute-1 sshd-session[72634]: Accepted publickey for ceph-admin from 192.168.122.100 port 34726 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:22 compute-1 systemd-logind[791]: New session 25 of user ceph-admin.
Dec 11 09:13:22 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Dec 11 09:13:22 compute-1 sshd-session[72634]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:22 compute-1 sudo[72638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:22 compute-1 sudo[72638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:22 compute-1 sudo[72638]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:23 compute-1 sshd-session[72663]: Accepted publickey for ceph-admin from 192.168.122.100 port 34742 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:23 compute-1 systemd-logind[791]: New session 26 of user ceph-admin.
Dec 11 09:13:23 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Dec 11 09:13:23 compute-1 sshd-session[72663]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:23 compute-1 sudo[72667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:23 compute-1 sudo[72667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:23 compute-1 sudo[72667]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:23 compute-1 sshd-session[72692]: Accepted publickey for ceph-admin from 192.168.122.100 port 34744 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:23 compute-1 systemd-logind[791]: New session 27 of user ceph-admin.
Dec 11 09:13:23 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Dec 11 09:13:23 compute-1 sshd-session[72692]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:23 compute-1 sudo[72696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 11 09:13:23 compute-1 sudo[72696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:23 compute-1 sudo[72696]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:23 compute-1 sshd-session[72721]: Accepted publickey for ceph-admin from 192.168.122.100 port 34760 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:23 compute-1 systemd-logind[791]: New session 28 of user ceph-admin.
Dec 11 09:13:23 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Dec 11 09:13:23 compute-1 sshd-session[72721]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:23 compute-1 sudo[72725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:23 compute-1 sudo[72725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:23 compute-1 sudo[72725]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:24 compute-1 sshd-session[72750]: Accepted publickey for ceph-admin from 192.168.122.100 port 34770 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:24 compute-1 systemd-logind[791]: New session 29 of user ceph-admin.
Dec 11 09:13:24 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Dec 11 09:13:24 compute-1 sshd-session[72750]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:24 compute-1 sudo[72754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Dec 11 09:13:24 compute-1 sudo[72754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:24 compute-1 sudo[72754]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:24 compute-1 sshd-session[72779]: Accepted publickey for ceph-admin from 192.168.122.100 port 34780 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:24 compute-1 systemd-logind[791]: New session 30 of user ceph-admin.
Dec 11 09:13:24 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Dec 11 09:13:24 compute-1 sshd-session[72779]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:25 compute-1 sshd-session[72806]: Accepted publickey for ceph-admin from 192.168.122.100 port 34784 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:25 compute-1 systemd-logind[791]: New session 31 of user ceph-admin.
Dec 11 09:13:25 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Dec 11 09:13:25 compute-1 sshd-session[72806]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:25 compute-1 sudo[72810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Dec 11 09:13:25 compute-1 sudo[72810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:25 compute-1 sudo[72810]: pam_unix(sudo:session): session closed for user root
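[Note] The sudo bursts from 09:13:23 to 09:13:25 are cephadm pushing a copy of its own binary to the host: build the staging tree under /tmp/cephadm-<fsid>, touch an empty ".new" file, hand the staging tree to the unprivileged ceph-admin user (so the bytes can be written over the SSH session without root), set the final mode, then sudo-mv the file into /var/lib/ceph/<fsid>. A minimal Python sketch of that sequence; the helper name and the write step are assumptions, since the log only shows the mkdir/touch/chown/chmod/mv around them:

    import os
    import pwd
    import shutil

    def stage_and_install(staging_root, final_path, payload, mode=0o644,
                          staging_user="ceph-admin"):
        # mkdir -p <staging_root>/<dirname of final_path>, as in the log
        staged = os.path.join(staging_root, final_path.lstrip("/")) + ".new"
        os.makedirs(os.path.dirname(staged), exist_ok=True)
        open(staged, "ab").close()                     # "touch ... .new"
        uid = pwd.getpwnam(staging_user).pw_uid
        os.chown(os.path.dirname(staged), uid, -1)     # "chown -R ceph-admin ..."
        os.chown(staged, uid, -1)
        with open(staged, "wb") as f:                  # content write (assumed;
            f.write(payload)                           # not visible in sudo log)
        os.chmod(staged, mode)                         # "chmod 644 ... .new"
        shutil.move(staged, final_path)                # "mv ... .new <final>"

The ".new" suffix marks in-flight content until the final mv, so the destination name under /var/lib/ceph only ever appears once the payload is complete.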
Dec 11 09:13:25 compute-1 sshd-session[72835]: Accepted publickey for ceph-admin from 192.168.122.100 port 34796 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:13:25 compute-1 systemd-logind[791]: New session 32 of user ceph-admin.
Dec 11 09:13:25 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Dec 11 09:13:25 compute-1 sshd-session[72835]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:13:26 compute-1 sudo[72839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Dec 11 09:13:26 compute-1 sudo[72839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:26 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:26 compute-1 sudo[72839]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:26 compute-1 sudo[72884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:26 compute-1 sudo[72884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:26 compute-1 sudo[72884]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:26 compute-1 sudo[72909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Dec 11 09:13:26 compute-1 sudo[72909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:26 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:26 compute-1 sudo[72909]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:26 compute-1 sudo[72955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:26 compute-1 sudo[72955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:26 compute-1 sudo[72955]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:26 compute-1 sudo[72980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:13:26 compute-1 sudo[72980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:27 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:27 compute-1 sudo[72980]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:27 compute-1 sudo[73040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:27 compute-1 sudo[73040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:27 compute-1 sudo[73040]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:27 compute-1 sudo[73065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:13:27 compute-1 sudo[73065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:27 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:27 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73103 (sysctl)
Dec 11 09:13:27 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 11 09:13:27 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 11 09:13:27 compute-1 sudo[73065]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:27 compute-1 sudo[73125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:27 compute-1 sudo[73125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:27 compute-1 sudo[73125]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:28 compute-1 sudo[73150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:13:28 compute-1 sudo[73150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:28 compute-1 sudo[73150]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:28 compute-1 sudo[73192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:13:28 compute-1 sudo[73192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:28 compute-1 sudo[73192]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:28 compute-1 sudo[73217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- inventory --format=json-pretty --filter-for-batch
Dec 11 09:13:28 compute-1 sudo[73217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:28 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat582286758-merged.mount: Deactivated successfully.
Dec 11 09:13:33 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat582286758-lower\x2dmapped.mount: Deactivated successfully.
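[Note] From 09:13:26 onward every host probe has the same shape: "sudo which python3" to locate an interpreter, then "sudo python3 <copied cephadm> --timeout 895 <subcommand>" for check-host, ls, gather-facts, list-networks, and finally ceph-volume inventory; the container-backed subcommands additionally pin the ceph image by digest. A sketch of driving these probes and collecting their JSON, assuming the subcommands print JSON to stdout as they appear to here (the binary path, digest, and timeout are copied from the log; run_probe itself is illustrative):

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def run_probe(*args, image=None):
        # Invoke the copied cephadm the same way the sudo log lines do.
        cmd = ["sudo", "/bin/python3", CEPHADM]
        if image:
            cmd += ["--image", image]        # digest-pinned, as logged
        cmd += ["--timeout", "895"] + list(args)
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    run_probe("check-host", "--expect-hostname", "compute-1")
    facts = json.loads(run_probe("gather-facts"))
    networks = json.loads(run_probe("list-networks", image=IMAGE))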
Dec 11 09:13:56 compute-1 podman[73279]: 2025-12-11 09:13:56.461766207 +0000 UTC m=+27.735508153 container create 297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:13:56 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 11 09:13:56 compute-1 systemd[1]: Started libpod-conmon-297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f.scope.
Dec 11 09:13:56 compute-1 podman[73279]: 2025-12-11 09:13:56.442612538 +0000 UTC m=+27.716354504 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:13:56 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:13:56 compute-1 podman[73279]: 2025-12-11 09:13:56.57332331 +0000 UTC m=+27.847065286 container init 297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:13:56 compute-1 podman[73279]: 2025-12-11 09:13:56.579694775 +0000 UTC m=+27.853436721 container start 297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gagarin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:13:56 compute-1 podman[73279]: 2025-12-11 09:13:56.583413007 +0000 UTC m=+27.857154963 container attach 297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gagarin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 11 09:13:56 compute-1 sharp_gagarin[73340]: 167 167
Dec 11 09:13:56 compute-1 systemd[1]: libpod-297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f.scope: Deactivated successfully.
Dec 11 09:13:56 compute-1 podman[73346]: 2025-12-11 09:13:56.636633821 +0000 UTC m=+0.028243473 container died 297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gagarin, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 11 09:13:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-f1865213817b2b02316b9931b55e6da3b1d7dc52d8b9fb3094ee3d482ce64789-merged.mount: Deactivated successfully.
Dec 11 09:13:56 compute-1 podman[73346]: 2025-12-11 09:13:56.88360281 +0000 UTC m=+0.275212482 container remove 297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 11 09:13:56 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:56 compute-1 systemd[1]: libpod-conmon-297f760c3f199a9a5e3f9aea9479cc0e669d1ddeb097d3bca71b55ead3e9128f.scope: Deactivated successfully.
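[Note] The short-lived containers whose only output is "167 167" (sharp_gagarin here, and again at 09:14:01 and 09:14:03) are consistent with cephadm probing the image for the uid/gid that owns the ceph directories; 167:167 is the ceph user and group in these containers. The exact probe command is an assumption, only the "167 167" output is in the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # One-shot container: print "<uid> <gid>" of /var/lib/ceph in the image.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout.split()
    ceph_uid, ceph_gid = map(int, out)   # -> 167, 167 on this image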
Dec 11 09:13:57 compute-1 podman[73368]: 2025-12-11 09:13:57.13307285 +0000 UTC m=+0.060100452 container create 254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:57 compute-1 podman[73368]: 2025-12-11 09:13:57.113938341 +0000 UTC m=+0.040965963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:13:57 compute-1 systemd[1]: Started libpod-conmon-254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150.scope.
Dec 11 09:13:57 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:13:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431dd06e2c36ca0bf81f2e2e0dd877425c9a9570d3968cd6ac680005abe1e35f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431dd06e2c36ca0bf81f2e2e0dd877425c9a9570d3968cd6ac680005abe1e35f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:13:57 compute-1 podman[73368]: 2025-12-11 09:13:57.281797913 +0000 UTC m=+0.208825535 container init 254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec 11 09:13:57 compute-1 podman[73368]: 2025-12-11 09:13:57.288340813 +0000 UTC m=+0.215368415 container start 254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:13:57 compute-1 podman[73368]: 2025-12-11 09:13:57.291824399 +0000 UTC m=+0.218852061 container attach 254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec 11 09:13:57 compute-1 quirky_colden[73384]: [
Dec 11 09:13:57 compute-1 quirky_colden[73384]:     {
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "available": false,
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "being_replaced": false,
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "ceph_device_lvm": false,
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "lsm_data": {},
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "lvs": [],
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "path": "/dev/sr0",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "rejected_reasons": [
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "Has a FileSystem",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "Insufficient space (<5GB)"
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         ],
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         "sys_api": {
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "actuators": null,
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "device_nodes": [
Dec 11 09:13:57 compute-1 quirky_colden[73384]:                 "sr0"
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             ],
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "devname": "sr0",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "human_readable_size": "482.00 KB",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "id_bus": "ata",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "model": "QEMU DVD-ROM",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "nr_requests": "2",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "parent": "/dev/sr0",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "partitions": {},
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "path": "/dev/sr0",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "removable": "1",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "rev": "2.5+",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "ro": "0",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "rotational": "1",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "sas_address": "",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "sas_device_handle": "",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "scheduler_mode": "mq-deadline",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "sectors": 0,
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "sectorsize": "2048",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "size": 493568.0,
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "support_discard": "2048",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "type": "disk",
Dec 11 09:13:57 compute-1 quirky_colden[73384]:             "vendor": "QEMU"
Dec 11 09:13:57 compute-1 quirky_colden[73384]:         }
Dec 11 09:13:57 compute-1 quirky_colden[73384]:     }
Dec 11 09:13:57 compute-1 quirky_colden[73384]: ]
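[Note] The JSON printed by quirky_colden is the ceph-volume inventory the orchestrator filters when placing OSDs. The only raw device on this VM is the QEMU DVD-ROM, rejected for having a filesystem and being under 5 GB, which is consistent with the later deploy step targeting a pre-created logical volume instead of a raw disk. A sketch of the filtering, using a trimmed copy of the JSON above (field names come straight from the log; the parsing itself is illustrative):

    import json

    raw_output = """[{"available": false, "path": "/dev/sr0",
                      "rejected_reasons": ["Has a FileSystem",
                                           "Insufficient space (<5GB)"]}]"""
    inventory = json.loads(raw_output)
    usable = [d["path"] for d in inventory if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"]
                for d in inventory if not d["available"]}
    # Here: usable == [] and rejected == {"/dev/sr0":
    #     ["Has a FileSystem", "Insufficient space (<5GB)"]}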
Dec 11 09:13:58 compute-1 systemd[1]: libpod-254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150.scope: Deactivated successfully.
Dec 11 09:13:58 compute-1 podman[74460]: 2025-12-11 09:13:58.05612739 +0000 UTC m=+0.026922771 container died 254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-431dd06e2c36ca0bf81f2e2e0dd877425c9a9570d3968cd6ac680005abe1e35f-merged.mount: Deactivated successfully.
Dec 11 09:13:58 compute-1 podman[74460]: 2025-12-11 09:13:58.177295978 +0000 UTC m=+0.148091329 container remove 254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_colden, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:13:58 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:13:58 compute-1 systemd[1]: libpod-conmon-254d104e938a8656270d8ca1739673a31a16a841a37d30a2902dd08489805150.scope: Deactivated successfully.
Dec 11 09:13:58 compute-1 sudo[73217]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:58 compute-1 sudo[74475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:13:58 compute-1 sudo[74475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:58 compute-1 sudo[74475]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:58 compute-1 sudo[74500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:13:58 compute-1 sudo[74500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:58 compute-1 sudo[74500]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:58 compute-1 sudo[74525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:58 compute-1 sudo[74525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:58 compute-1 sudo[74525]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:58 compute-1 sudo[74550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:58 compute-1 sudo[74550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:58 compute-1 sudo[74550]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:58 compute-1 sudo[74575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:58 compute-1 sudo[74575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:58 compute-1 sudo[74575]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:58 compute-1 sudo[74623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:58 compute-1 sudo[74623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:58 compute-1 sudo[74623]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:13:59 compute-1 sudo[74648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74648]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:13:59 compute-1 sudo[74673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74673]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:13:59 compute-1 sudo[74698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74698]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:13:59 compute-1 sudo[74723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74723]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:59 compute-1 sudo[74748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74748]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:59 compute-1 sudo[74773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74773]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:59 compute-1 sudo[74798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74798]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:59 compute-1 sudo[74846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74846]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:13:59 compute-1 sudo[74871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74871]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:13:59 compute-1 sudo[74896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74896]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:13:59 compute-1 sudo[74921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74921]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:13:59 compute-1 sudo[74946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74946]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:59 compute-1 sudo[74971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74971]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[74996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:13:59 compute-1 sudo[74996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[74996]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[75021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:59 compute-1 sudo[75021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[75021]: pam_unix(sudo:session): session closed for user root
Dec 11 09:13:59 compute-1 sudo[75069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:13:59 compute-1 sudo[75069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:13:59 compute-1 sudo[75069]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:14:00 compute-1 sudo[75094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75094]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:14:00 compute-1 sudo[75119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75119]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:14:00 compute-1 sudo[75144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75144]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:14:00 compute-1 sudo[75169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75169]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:14:00 compute-1 sudo[75194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75194]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:14:00 compute-1 sudo[75219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75219]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:14:00 compute-1 sudo[75244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75244]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:14:00 compute-1 sudo[75292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75292]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:14:00 compute-1 sudo[75317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75317]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:14:00 compute-1 sudo[75342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75342]: pam_unix(sudo:session): session closed for user root
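[Note] The 09:13:58-09:14:00 block repeats the staging dance four times, installing ceph.conf and the admin keyring both under /etc/ceph and under /var/lib/ceph/<fsid>/config. The chmod calls show a mode split: each config file ends at 644, while each keyring is briefly 644 in the ceph-admin-owned staging directory and is then tightened to root-owned 600 just before the mv, so the secret lands unreadable to non-root. The final ownership/mode per destination, read straight from the chown/chmod lines (the mapping itself is just a summary):

    # <fsid> stands for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 throughout.
    FINAL_PERMS = {
        "/etc/ceph/ceph.conf":                                  ("root:root", 0o644),
        "/etc/ceph/ceph.client.admin.keyring":                  ("root:root", 0o600),
        "/var/lib/ceph/<fsid>/config/ceph.conf":                ("root:root", 0o644),
        "/var/lib/ceph/<fsid>/config/ceph.client.admin.keyring":("root:root", 0o600),
    }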
Dec 11 09:14:00 compute-1 sudo[75367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:00 compute-1 sudo[75367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:00 compute-1 sudo[75367]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:00 compute-1 sudo[75392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:14:00 compute-1 sudo[75392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:14:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.194233195 +0000 UTC m=+0.041120438 container create 4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_johnson, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 11 09:14:01 compute-1 systemd[1]: Started libpod-conmon-4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630.scope.
Dec 11 09:14:01 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.255557948 +0000 UTC m=+0.102445211 container init 4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.261724408 +0000 UTC m=+0.108611651 container start 4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_johnson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.265348217 +0000 UTC m=+0.112235480 container attach 4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:01 compute-1 serene_johnson[75475]: 167 167
Dec 11 09:14:01 compute-1 systemd[1]: libpod-4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630.scope: Deactivated successfully.
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.267476139 +0000 UTC m=+0.114363372 container died 4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.175272901 +0000 UTC m=+0.022160234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:01 compute-1 podman[75458]: 2025-12-11 09:14:01.302987889 +0000 UTC m=+0.149875122 container remove 4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:01 compute-1 systemd[1]: libpod-conmon-4b49d01cf5e5b1cee6d12ee330e19359133ef7aa9dd2c3df18fc4309fa2f7630.scope: Deactivated successfully.
Dec 11 09:14:01 compute-1 systemd[1]: Reloading.
Dec 11 09:14:01 compute-1 systemd-rc-local-generator[75521]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:01 compute-1 systemd-sysv-generator[75524]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:01 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:14:01 compute-1 systemd[1]: Reloading.
Dec 11 09:14:01 compute-1 systemd-rc-local-generator[75556]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:01 compute-1 systemd-sysv-generator[75559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:01 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Dec 11 09:14:01 compute-1 systemd[1]: Reloading.
Dec 11 09:14:01 compute-1 systemd-rc-local-generator[75594]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:01 compute-1 systemd-sysv-generator[75597]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:02 compute-1 systemd[1]: Reached target Ceph cluster 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:14:02 compute-1 systemd[1]: Reloading.
Dec 11 09:14:02 compute-1 systemd-sysv-generator[75636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:02 compute-1 systemd-rc-local-generator[75632]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:02 compute-1 systemd[1]: Reloading.
Dec 11 09:14:02 compute-1 systemd-rc-local-generator[75672]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:02 compute-1 systemd-sysv-generator[75676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:02 compute-1 systemd[1]: Created slice Slice /system/ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:14:02 compute-1 systemd[1]: Reached target System Time Set.
Dec 11 09:14:02 compute-1 systemd[1]: Reached target System Time Synchronized.
Dec 11 09:14:02 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:14:02 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 11 09:14:02 compute-1 podman[75728]: 2025-12-11 09:14:02.860714235 +0000 UTC m=+0.026477181 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:02 compute-1 podman[75728]: 2025-12-11 09:14:02.966206948 +0000 UTC m=+0.131969874 container create 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230bfa96fa65f25a4d9f257cc45295c1a0719bdd18aa60d1efec0ab4fc484c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230bfa96fa65f25a4d9f257cc45295c1a0719bdd18aa60d1efec0ab4fc484c77/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:03 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/230bfa96fa65f25a4d9f257cc45295c1a0719bdd18aa60d1efec0ab4fc484c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:03 compute-1 podman[75728]: 2025-12-11 09:14:03.209046156 +0000 UTC m=+0.374809102 container init 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 11 09:14:03 compute-1 podman[75728]: 2025-12-11 09:14:03.213614049 +0000 UTC m=+0.379376975 container start 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:03 compute-1 bash[75728]: 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd
Dec 11 09:14:03 compute-1 systemd[1]: Started Ceph crash.compute-1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 11 09:14:03 compute-1 sudo[75392]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: 2025-12-11T09:14:03.377+0000 7fccf8028640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: 2025-12-11T09:14:03.377+0000 7fccf8028640 -1 AuthRegistry(0x7fccf00698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: 2025-12-11T09:14:03.378+0000 7fccf8028640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: 2025-12-11T09:14:03.378+0000 7fccf8028640 -1 AuthRegistry(0x7fccf8026ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: 2025-12-11T09:14:03.381+0000 7fccf5d9d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: 2025-12-11T09:14:03.382+0000 7fccf8028640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 11 09:14:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1[75743]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
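[Note] The crash container's startup ping fails in a specific, non-fatal way: no keyring exists at any of the four default client paths, so it disables cephx, the monitor rejects the unauthenticated attempt (allowed_methods [2] is cephx; it offered only [1], none), and the ping ends with EACCES. The bind mounts at 09:14:03 only provide ceph.client.crash.compute-1.keyring, which is not on the default search list, and the daemon still settles into its 600-second scan of /var/lib/ceph/crash. The search order quoted verbatim in the errors, checked by hand for illustration (not what ceph-crash does internally):

    import os

    DEFAULT_KEYRINGS = [
        "/etc/ceph/ceph.client.admin.keyring",
        "/etc/ceph/ceph.keyring",
        "/etc/ceph/keyring",
        "/etc/ceph/keyring.bin",
    ]
    found = [p for p in DEFAULT_KEYRINGS if os.path.exists(p)]
    # Inside this container found == []: only the crash keyring was mounted
    # in, under a name that never matches the default list.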
Dec 11 09:14:03 compute-1 sudo[75760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:03 compute-1 sudo[75760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:03 compute-1 sudo[75760]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:03 compute-1 sudo[75785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Dec 11 09:14:03 compute-1 sudo[75785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.819069579 +0000 UTC m=+0.037081900 container create 4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:03 compute-1 systemd[1]: Started libpod-conmon-4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933.scope.
Dec 11 09:14:03 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.803434535 +0000 UTC m=+0.021446886 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.903973548 +0000 UTC m=+0.121985909 container init 4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_elbakyan, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.911134514 +0000 UTC m=+0.129146845 container start 4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_elbakyan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.914351402 +0000 UTC m=+0.132363733 container attach 4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_elbakyan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:03 compute-1 admiring_elbakyan[75864]: 167 167
Dec 11 09:14:03 compute-1 systemd[1]: libpod-4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933.scope: Deactivated successfully.
Dec 11 09:14:03 compute-1 conmon[75864]: conmon 4525ec2223fd7d842821 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933.scope/container/memory.events
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.916921945 +0000 UTC m=+0.134934296 container died 4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-e4c4af302ee8d75e711e9bde92786ff45af9b27080562c93c1fd6fcebdc25f55-merged.mount: Deactivated successfully.
Dec 11 09:14:03 compute-1 podman[75848]: 2025-12-11 09:14:03.963766572 +0000 UTC m=+0.181778903 container remove 4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_elbakyan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:03 compute-1 systemd[1]: libpod-conmon-4525ec2223fd7d842821131a216440242a740c4494399f1a1fb7d166190b6933.scope: Deactivated successfully.
Dec 11 09:14:04 compute-1 podman[75890]: 2025-12-11 09:14:04.121581158 +0000 UTC m=+0.044211984 container create c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 11 09:14:04 compute-1 systemd[1]: Started libpod-conmon-c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc.scope.
Dec 11 09:14:04 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d536f39f0bb60feb272763a4899b77b13104d06d182b7c7f44fa56e9c85e1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d536f39f0bb60feb272763a4899b77b13104d06d182b7c7f44fa56e9c85e1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d536f39f0bb60feb272763a4899b77b13104d06d182b7c7f44fa56e9c85e1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d536f39f0bb60feb272763a4899b77b13104d06d182b7c7f44fa56e9c85e1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d536f39f0bb60feb272763a4899b77b13104d06d182b7c7f44fa56e9c85e1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:04 compute-1 podman[75890]: 2025-12-11 09:14:04.101395803 +0000 UTC m=+0.024026639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:04 compute-1 podman[75890]: 2025-12-11 09:14:04.205022991 +0000 UTC m=+0.127653807 container init c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 11 09:14:04 compute-1 podman[75890]: 2025-12-11 09:14:04.213282064 +0000 UTC m=+0.135912880 container start c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_leakey, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:14:04 compute-1 podman[75890]: 2025-12-11 09:14:04.222327876 +0000 UTC m=+0.144958712 container attach c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:04 compute-1 gallant_leakey[75905]: --> passed data devices: 0 physical, 1 LVM
Dec 11 09:14:04 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:04 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:04 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a627ae9c-42f5-4a06-85ec-51173588750e
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:05 compute-1 lvm[75966]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:05 compute-1 lvm[75966]: VG ceph_vg0 finished
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 11 09:14:05 compute-1 gallant_leakey[75905]:  stderr: got monmap epoch 1
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: --> Creating keyring file for osd.0
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 11 09:14:05 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid a627ae9c-42f5-4a06-85ec-51173588750e --setuser ceph --setgroup ceph
Dec 11 09:14:09 compute-1 gallant_leakey[75905]:  stderr: 2025-12-11T09:14:05.804+0000 7f96c60c1740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 11 09:14:09 compute-1 gallant_leakey[75905]:  stderr: 2025-12-11T09:14:06.073+0000 7f96c60c1740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 11 09:14:09 compute-1 gallant_leakey[75905]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 11 09:14:09 compute-1 systemd[1]: libpod-c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc.scope: Deactivated successfully.
Dec 11 09:14:09 compute-1 systemd[1]: libpod-c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc.scope: Consumed 2.022s CPU time.
Dec 11 09:14:09 compute-1 podman[75890]: 2025-12-11 09:14:09.964012374 +0000 UTC m=+5.886643190 container died c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_leakey, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec 11 09:14:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-94d536f39f0bb60feb272763a4899b77b13104d06d182b7c7f44fa56e9c85e1e-merged.mount: Deactivated successfully.
Dec 11 09:14:10 compute-1 podman[75890]: 2025-12-11 09:14:10.013940677 +0000 UTC m=+5.936571493 container remove c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:10 compute-1 systemd[1]: libpod-conmon-c66b16ac7cbaac1c52049dc5aaecf91aec4f799ae7a3d2555f787586312c5bdc.scope: Deactivated successfully.
Dec 11 09:14:10 compute-1 sudo[75785]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:10 compute-1 sudo[76880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:10 compute-1 sudo[76880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:10 compute-1 sudo[76880]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:10 compute-1 sudo[76905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- lvm list --format json
Dec 11 09:14:10 compute-1 sudo[76905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.581993382 +0000 UTC m=+0.098887404 container create bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.504263877 +0000 UTC m=+0.021158089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:10 compute-1 systemd[1]: Started libpod-conmon-bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b.scope.
Dec 11 09:14:10 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.694065216 +0000 UTC m=+0.210959258 container init bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swirles, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.700603436 +0000 UTC m=+0.217497458 container start bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 11 09:14:10 compute-1 zealous_swirles[76986]: 167 167
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.704583734 +0000 UTC m=+0.221477766 container attach bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swirles, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:10 compute-1 systemd[1]: libpod-bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b.scope: Deactivated successfully.
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.706033719 +0000 UTC m=+0.222927731 container died bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec 11 09:14:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-586a13aeeca3214410611acd06cdfd3d9054754e56631a6e46247d06df013955-merged.mount: Deactivated successfully.
Dec 11 09:14:10 compute-1 podman[76970]: 2025-12-11 09:14:10.927693379 +0000 UTC m=+0.444587401 container remove bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_swirles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:10 compute-1 systemd[1]: libpod-conmon-bc14d2aedef2a7d8c9b3fcfd66baf053cff72479fd07cdfb58c633141c42ac8b.scope: Deactivated successfully.
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.076209306 +0000 UTC m=+0.042259766 container create f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_liskov, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:11 compute-1 systemd[1]: Started libpod-conmon-f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3.scope.
Dec 11 09:14:11 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72aad08a3a2e167bec7c0692c2e0093c8f07df43fca982719a1bee54e697b1cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72aad08a3a2e167bec7c0692c2e0093c8f07df43fca982719a1bee54e697b1cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72aad08a3a2e167bec7c0692c2e0093c8f07df43fca982719a1bee54e697b1cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72aad08a3a2e167bec7c0692c2e0093c8f07df43fca982719a1bee54e697b1cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.150160238 +0000 UTC m=+0.116210708 container init f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_liskov, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.059370524 +0000 UTC m=+0.025421014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.158281877 +0000 UTC m=+0.124332337 container start f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_liskov, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.162840438 +0000 UTC m=+0.128890898 container attach f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_liskov, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:11 compute-1 elegant_liskov[77028]: {
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:     "0": [
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:         {
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "devices": [
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "/dev/loop3"
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             ],
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "lv_name": "ceph_lv0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "lv_size": "21470642176",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=qCo5kP-tLH6-kqjn-bD4y-DeiX-7JC8-3Q1LnK,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a627ae9c-42f5-4a06-85ec-51173588750e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "lv_uuid": "qCo5kP-tLH6-kqjn-bD4y-DeiX-7JC8-3Q1LnK",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "name": "ceph_lv0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "tags": {
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.block_uuid": "qCo5kP-tLH6-kqjn-bD4y-DeiX-7JC8-3Q1LnK",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.cephx_lockbox_secret": "",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.cluster_fsid": "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.cluster_name": "ceph",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.crush_device_class": "",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.encrypted": "0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.osd_fsid": "a627ae9c-42f5-4a06-85ec-51173588750e",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.osd_id": "0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.type": "block",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.vdo": "0",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:                 "ceph.with_tpm": "0"
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             },
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "type": "block",
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:             "vg_name": "ceph_vg0"
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:         }
Dec 11 09:14:11 compute-1 elegant_liskov[77028]:     ]
Dec 11 09:14:11 compute-1 elegant_liskov[77028]: }
Dec 11 09:14:11 compute-1 systemd[1]: libpod-f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3.scope: Deactivated successfully.
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.491604662 +0000 UTC m=+0.457655122 container died f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_liskov, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:14:11 compute-1 systemd[1]: var-lib-containers-storage-overlay-72aad08a3a2e167bec7c0692c2e0093c8f07df43fca982719a1bee54e697b1cb-merged.mount: Deactivated successfully.
Dec 11 09:14:11 compute-1 podman[77011]: 2025-12-11 09:14:11.61525389 +0000 UTC m=+0.581304350 container remove f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 11 09:14:11 compute-1 systemd[1]: libpod-conmon-f752fe76a6555c879a5254b0326afa59e5ca6cb76bbf32723f06804dcf640ab3.scope: Deactivated successfully.
Dec 11 09:14:11 compute-1 sudo[76905]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:11 compute-1 sudo[77050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:11 compute-1 sudo[77050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:11 compute-1 sudo[77050]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:11 compute-1 sudo[77075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:14:11 compute-1 sudo[77075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.193389922 +0000 UTC m=+0.090403696 container create 0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_faraday, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.124650158 +0000 UTC m=+0.021663922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:12 compute-1 systemd[1]: Started libpod-conmon-0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037.scope.
Dec 11 09:14:12 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.265420835 +0000 UTC m=+0.162434649 container init 0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.271284569 +0000 UTC m=+0.168298333 container start 0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.273993485 +0000 UTC m=+0.171007279 container attach 0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_faraday, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:12 compute-1 wizardly_faraday[77156]: 167 167
Dec 11 09:14:12 compute-1 systemd[1]: libpod-0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037.scope: Deactivated successfully.
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.275754878 +0000 UTC m=+0.172768652 container died 0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_faraday, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec 11 09:14:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-c4c06cd0e84eac8a87356b459a0efeb8eb6291330d808e3011e1b5e5ab9b069b-merged.mount: Deactivated successfully.
Dec 11 09:14:12 compute-1 podman[77140]: 2025-12-11 09:14:12.309795483 +0000 UTC m=+0.206809247 container remove 0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:12 compute-1 systemd[1]: libpod-conmon-0bfe91bdf7fd85b476d28def1446e229b0796cb672045fd3582506b73242e037.scope: Deactivated successfully.
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.595955552 +0000 UTC m=+0.039939179 container create a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 11 09:14:12 compute-1 systemd[1]: Started libpod-conmon-a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d.scope.
Dec 11 09:14:12 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191b19a0d4320a1d565d83cd06026d9ede8a42dd63259414145af924606c271c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191b19a0d4320a1d565d83cd06026d9ede8a42dd63259414145af924606c271c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191b19a0d4320a1d565d83cd06026d9ede8a42dd63259414145af924606c271c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191b19a0d4320a1d565d83cd06026d9ede8a42dd63259414145af924606c271c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/191b19a0d4320a1d565d83cd06026d9ede8a42dd63259414145af924606c271c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.576641059 +0000 UTC m=+0.020624706 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.678966414 +0000 UTC m=+0.122950061 container init a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.688409886 +0000 UTC m=+0.132393493 container start a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.692416134 +0000 UTC m=+0.136399751 container attach a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 11 09:14:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test[77200]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 11 09:14:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test[77200]:                             [--no-systemd] [--no-tmpfs]
Dec 11 09:14:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test[77200]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 11 09:14:12 compute-1 systemd[1]: libpod-a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d.scope: Deactivated successfully.
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.864817557 +0000 UTC m=+0.308801194 container died a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 11 09:14:12 compute-1 systemd[1]: var-lib-containers-storage-overlay-191b19a0d4320a1d565d83cd06026d9ede8a42dd63259414145af924606c271c-merged.mount: Deactivated successfully.
Dec 11 09:14:12 compute-1 podman[77184]: 2025-12-11 09:14:12.910502386 +0000 UTC m=+0.354485993 container remove a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate-test, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 11 09:14:12 compute-1 systemd[1]: libpod-conmon-a0fbd97fb5b13308c59165da8791a2b92273da41122c6c295c9a5927ba59e12d.scope: Deactivated successfully.
Dec 11 09:14:13 compute-1 systemd[1]: Reloading.
Dec 11 09:14:13 compute-1 systemd-rc-local-generator[77260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:13 compute-1 systemd-sysv-generator[77265]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:13 compute-1 systemd[1]: Reloading.
Dec 11 09:14:13 compute-1 systemd-sysv-generator[77305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:13 compute-1 systemd-rc-local-generator[77300]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:14:13 compute-1 systemd[1]: Starting Ceph osd.0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:14:13 compute-1 podman[77359]: 2025-12-11 09:14:13.914111299 +0000 UTC m=+0.087894224 container create aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:13 compute-1 podman[77359]: 2025-12-11 09:14:13.848083892 +0000 UTC m=+0.021866837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:13 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c10bf982740670bd549a5e623e2501eef209e980d30c6cedc487ccf08b1ce2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c10bf982740670bd549a5e623e2501eef209e980d30c6cedc487ccf08b1ce2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c10bf982740670bd549a5e623e2501eef209e980d30c6cedc487ccf08b1ce2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c10bf982740670bd549a5e623e2501eef209e980d30c6cedc487ccf08b1ce2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c10bf982740670bd549a5e623e2501eef209e980d30c6cedc487ccf08b1ce2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:14 compute-1 podman[77359]: 2025-12-11 09:14:14.00721279 +0000 UTC m=+0.180995765 container init aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec 11 09:14:14 compute-1 podman[77359]: 2025-12-11 09:14:14.016263332 +0000 UTC m=+0.190046237 container start aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 09:14:14 compute-1 podman[77359]: 2025-12-11 09:14:14.019320126 +0000 UTC m=+0.193103061 container attach aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-1 lvm[77456]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:14 compute-1 lvm[77456]: VG ceph_vg0 finished
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 11 09:14:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 11 09:14:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 11 09:14:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 11 09:14:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate[77375]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 11 09:14:15 compute-1 systemd[1]: libpod-aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b.scope: Deactivated successfully.
Dec 11 09:14:15 compute-1 podman[77359]: 2025-12-11 09:14:15.197767321 +0000 UTC m=+1.371550256 container died aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:15 compute-1 systemd[1]: libpod-aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b.scope: Consumed 1.284s CPU time.
Dec 11 09:14:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-24c10bf982740670bd549a5e623e2501eef209e980d30c6cedc487ccf08b1ce2-merged.mount: Deactivated successfully.
Dec 11 09:14:15 compute-1 podman[77359]: 2025-12-11 09:14:15.24876666 +0000 UTC m=+1.422549575 container remove aa6138619e7284d2745f4a3c5a32174ba3ebac50588e4586dff96ec30616630b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0-activate, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec 11 09:14:15 compute-1 podman[77606]: 2025-12-11 09:14:15.471124726 +0000 UTC m=+0.044465760 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:15 compute-1 podman[77606]: 2025-12-11 09:14:15.756950738 +0000 UTC m=+0.330291612 container create f86737c5039a1397e2e5336144d04ddc97a227f2b9ad7af570d77a38c57a4f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec 11 09:14:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1efe4efc49d6070ef66f2dd817cad69d73dad0c318e031aae076265953852e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1efe4efc49d6070ef66f2dd817cad69d73dad0c318e031aae076265953852e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1efe4efc49d6070ef66f2dd817cad69d73dad0c318e031aae076265953852e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1efe4efc49d6070ef66f2dd817cad69d73dad0c318e031aae076265953852e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1efe4efc49d6070ef66f2dd817cad69d73dad0c318e031aae076265953852e8/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:15 compute-1 podman[77606]: 2025-12-11 09:14:15.867370253 +0000 UTC m=+0.440711147 container init f86737c5039a1397e2e5336144d04ddc97a227f2b9ad7af570d77a38c57a4f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:14:15 compute-1 podman[77606]: 2025-12-11 09:14:15.875503251 +0000 UTC m=+0.448844125 container start f86737c5039a1397e2e5336144d04ddc97a227f2b9ad7af570d77a38c57a4f1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 11 09:14:15 compute-1 bash[77606]: f86737c5039a1397e2e5336144d04ddc97a227f2b9ad7af570d77a38c57a4f1f
Dec 11 09:14:15 compute-1 systemd[1]: Started Ceph osd.0 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:14:15 compute-1 ceph-osd[77625]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:14:15 compute-1 ceph-osd[77625]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec 11 09:14:15 compute-1 ceph-osd[77625]: pidfile_write: ignore empty --pid-file
Dec 11 09:14:15 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:15 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:15 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:15 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:15 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:15 compute-1 sudo[77075]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:16 compute-1 sudo[77637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:16 compute-1 sudo[77637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:16 compute-1 sudo[77637]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:16 compute-1 sudo[77662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- raw list --format json
Dec 11 09:14:16 compute-1 sudo[77662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.492021233 +0000 UTC m=+0.036787482 container create a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rubin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:16 compute-1 systemd[1]: Started libpod-conmon-a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665.scope.
Dec 11 09:14:16 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.475578721 +0000 UTC m=+0.020344980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.63763869 +0000 UTC m=+0.182405039 container init a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rubin, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.644498418 +0000 UTC m=+0.189264667 container start a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rubin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec 11 09:14:16 compute-1 busy_rubin[77754]: 167 167
Dec 11 09:14:16 compute-1 systemd[1]: libpod-a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665.scope: Deactivated successfully.
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.651679984 +0000 UTC m=+0.196446283 container attach a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.652081444 +0000 UTC m=+0.196847703 container died a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rubin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:14:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-de792a6502134a381c15a6f6f1b9fc4dee74d5cdc9009d207189216fba8d9d1e-merged.mount: Deactivated successfully.
Dec 11 09:14:16 compute-1 podman[77735]: 2025-12-11 09:14:16.758508701 +0000 UTC m=+0.303274950 container remove a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rubin, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 11 09:14:16 compute-1 systemd[1]: libpod-conmon-a0e50142e04f83570de21b2018e1ee213ff0d6a012e4f4977516deb670064665.scope: Deactivated successfully.
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 11 09:14:16 compute-1 ceph-osd[77625]: bdev(0x5645547d1c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:16 compute-1 podman[77782]: 2025-12-11 09:14:16.883272016 +0000 UTC m=+0.024004958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:16 compute-1 podman[77782]: 2025-12-11 09:14:16.993100506 +0000 UTC m=+0.133833388 container create a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x5645547d1800 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:17 compute-1 systemd[1]: Started libpod-conmon-a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd.scope.
Dec 11 09:14:17 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ba890865978d265b1d254fe431b2ebabf709ffacab3c867732b407a1d3ec17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ba890865978d265b1d254fe431b2ebabf709ffacab3c867732b407a1d3ec17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ba890865978d265b1d254fe431b2ebabf709ffacab3c867732b407a1d3ec17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6ba890865978d265b1d254fe431b2ebabf709ffacab3c867732b407a1d3ec17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:17 compute-1 podman[77782]: 2025-12-11 09:14:17.142395524 +0000 UTC m=+0.283128376 container init a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:14:17 compute-1 podman[77782]: 2025-12-11 09:14:17.14878917 +0000 UTC m=+0.289522032 container start a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:14:17 compute-1 podman[77782]: 2025-12-11 09:14:17.173900955 +0000 UTC m=+0.314633827 container attach a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:14:17 compute-1 ceph-osd[77625]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 11 09:14:17 compute-1 ceph-osd[77625]: load: jerasure load: lrc 
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:17 compute-1 lvm[77882]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:17 compute-1 lvm[77882]: VG ceph_vg0 finished
Dec 11 09:14:17 compute-1 lvm[77886]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:14:17 compute-1 lvm[77886]: VG ceph_vg0 finished
Dec 11 09:14:17 compute-1 objective_haibt[77799]: {}
Dec 11 09:14:17 compute-1 ceph-osd[77625]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 11 09:14:17 compute-1 ceph-osd[77625]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:17 compute-1 systemd[1]: libpod-a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd.scope: Deactivated successfully.
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:17 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:17 compute-1 systemd[1]: libpod-a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd.scope: Consumed 1.104s CPU time.
Dec 11 09:14:17 compute-1 podman[77782]: 2025-12-11 09:14:17.879715203 +0000 UTC m=+1.020448145 container died a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Dec 11 09:14:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-d6ba890865978d265b1d254fe431b2ebabf709ffacab3c867732b407a1d3ec17-merged.mount: Deactivated successfully.
Dec 11 09:14:17 compute-1 podman[77782]: 2025-12-11 09:14:17.926219733 +0000 UTC m=+1.066952585 container remove a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:14:17 compute-1 systemd[1]: libpod-conmon-a19424a031518c574d18c690bac0678714aa67a53740da7ea4c4c428692dbedd.scope: Deactivated successfully.
Dec 11 09:14:17 compute-1 sudo[77662]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:18 compute-1 sudo[77910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:14:18 compute-1 sudo[77910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:18 compute-1 sudo[77910]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:18 compute-1 sudo[77937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:18 compute-1 sudo[77937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:18 compute-1 sudo[77937]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:18 compute-1 sudo[77962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:14:18 compute-1 sudo[77962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs mount shared_bdev_used = 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Git sha 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: DB SUMMARY
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: DB Session ID:  KVITPUQN4H1K87KH51WP
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                                     Options.env: 0x56455563ddc0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                                Options.info_log: 0x5645556417a0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                                 Options.wal_dir: db.wal
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.write_buffer_manager: 0x564555738a00
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.row_cache: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                              Options.wal_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.wal_compression: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.max_background_jobs: 4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Compression algorithms supported:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kZSTD supported: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5645548669b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5645548669b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5645548669b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
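[Annotation] The recovery pass above enumerates all twelve BlueStore column families: default, the sharded m-*/p-*/O-* families, and L and P. A minimal sketch for pulling that inventory back out of a saved copy of this journal (Python; the osd.log path is an assumption):

    import re

    cf_re = re.compile(r"Column family \[([^\]]+)\] \(ID (\d+)\)")
    with open("osd.log") as f:          # saved journal capture, hypothetical path
        for m in cf_re.finditer(f.read()):
            print(m.group(2), m.group(1))   # prints: 0 default, 1 m-0, ... 11 P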
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 522acc89-4868-4c50-bf5c-0e242f77f17a
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458771619, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444458771844, "job": 1, "event": "recovery_finished"}
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
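[Annotation] The option string logged by _open_db is Ceph's stock RocksDB tuning for BlueStore, normally carried by the bluestore_rocksdb_options setting; whether this cluster sets it explicitly or inherits the default is not visible in the log. A sketch of the equivalent ceph.conf entry, values copied verbatim from the line above:

    [osd]
    bluestore_rocksdb_options = compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0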
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 11 09:14:18 compute-1 ceph-osd[77625]: freelist init
Dec 11 09:14:18 compute-1 ceph-osd[77625]: freelist _read_cfg
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
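[Annotation] The allocator figures are internally consistent: capacity 0x4ffc00000 is 21,470,642,176 bytes (about 20 GiB, matching the bdev open size logged below), and free 0x4ffbfd000 leaves only 0x3000 (12 KiB) accounted as used at this point. A quick check in Python:

    cap, free = 0x4ffc00000, 0x4ffbfd000
    print(cap)                  # 21470642176 bytes ~= 20 GiB
    print(hex(cap - free))      # 0x3000 -> 12 KiB currently allocated
    print((cap - free) / cap)   # ~5.7e-07 of the device in use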
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 11 09:14:18 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bluefs umount
Dec 11 09:14:18 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) close
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bdev(0x56455566d000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
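[Annotation] The F_SET_FILE_RW_HINT failure above is non-fatal (the open proceeds), and the 512-vs-4096 block-size note can be checked from userspace. A sketch, assuming the same path and read permission on the device:

    import os

    # "block" is typically a symlink to the backing device; os.stat follows it
    st = os.stat("/var/lib/ceph/osd/ceph-0/block")
    print(st.st_blksize)   # the log reports 512; BlueStore uses bdev_block_size 4096 anyway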
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluefs mount shared_bdev_used = 4718592
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
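[Annotation] The 20,397,110,067-byte budget assigned to both db and db.slow is consistent with a 95% cut of the 21,470,642,176-byte device opened above (the exact sizing rule is an inference from the numbers, not stated in the log):

    print(int(21470642176 * 0.95))   # 20397110067, matching the db_paths line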
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Git sha 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: DB SUMMARY
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: DB Session ID:  KVITPUQN4H1K87KH51WO
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                                     Options.env: 0x5645557dc2a0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                                Options.info_log: 0x564555641920
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                                 Options.wal_dir: db.wal
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.write_buffer_manager: 0x564555738a00
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.row_cache: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                              Options.wal_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.wal_compression: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.max_background_jobs: 4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Compression algorithms supported:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kZSTD supported: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
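The [m-1] dump above is one instance of the same ColumnFamilyOptions applied to every shard. A minimal sketch of the headline values against the upstream RocksDB C++ headers; Ceph actually assembles these settings from its bluestore_rocksdb_options string rather than code like this:

    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeBluestoreLikeCfOptions() {
        rocksdb::ColumnFamilyOptions cf;
        cf.write_buffer_size = 16 * 1024 * 1024;        // 16 MiB memtables
        cf.max_write_buffer_number = 64;
        cf.min_write_buffer_number_to_merge = 6;
        cf.compression = rocksdb::kLZ4Compression;
        cf.num_levels = 7;
        cf.level0_file_num_compaction_trigger = 8;
        cf.level0_slowdown_writes_trigger = 20;
        cf.level0_stop_writes_trigger = 36;
        cf.target_file_size_base = 64ULL * 1024 * 1024; // 67108864
        cf.max_bytes_for_level_base = 1ULL << 30;       // 1073741824
        cf.max_bytes_for_level_multiplier = 8;
        cf.compaction_style = rocksdb::kCompactionStyleLevel;
        cf.compaction_pri = rocksdb::CompactionPri::kMinOverlappingRatio;
        cf.force_consistency_checks = true;             // as logged
        cf.ttl = 2592000;                               // 30 days, as logged
        return cf;
    }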
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
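Every dump reports the same block_cache pointer (0x564554867350), i.e. all column families share one ~460 MiB cache. A sketch of the table-factory side with stock RocksDB types; BinnedLRUCache is Ceph's own cache implementation, so a plain sharded LRU cache stands in here, and the bloom bits-per-key (10) is an assumption, since the log only says "bloomfilter":

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/table.h>
    #include <memory>

    std::shared_ptr<rocksdb::TableFactory> MakeBluestoreLikeTableFactory() {
        rocksdb::BlockBasedTableOptions t;
        t.block_size = 4096;
        t.cache_index_and_filter_blocks = true;
        t.pin_top_level_index_and_filter = true;
        t.format_version = 5;
        t.whole_key_filtering = true;
        t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10)); // assumed bits/key
        // Stand-in for Ceph's BinnedLRUCache: same capacity and shard count.
        t.block_cache = rocksdb::NewLRUCache(483183820 /* bytes */,
                                             4 /* num_shard_bits */);
        return std::shared_ptr<rocksdb::TableFactory>(
            rocksdb::NewBlockBasedTableFactory(t));
    }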
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
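The m-* and p-* names are BlueStore's sharded column families; the sharding layout is driven by Ceph's bluestore_rocksdb_cfs option. A hypothetical sketch of opening such a database with per-CF handles through the public RocksDB API; the path argument and the exact CF list here are illustrative, not BlueStore's actual open sequence:

    #include <rocksdb/db.h>
    #include <string>
    #include <vector>

    rocksdb::DB* OpenWithShardedCfs(const std::string& path,
                                    const rocksdb::ColumnFamilyOptions& cf_opts) {
        rocksdb::DBOptions db_opts;
        db_opts.create_if_missing = false;
        // Every pre-existing CF must be listed at open time.
        std::vector<rocksdb::ColumnFamilyDescriptor> cfs;
        for (const char* name : {"default", "m-0", "m-1", "m-2",
                                 "p-0", "p-1", "p-2"}) {
            cfs.emplace_back(name, cf_opts);
        }
        std::vector<rocksdb::ColumnFamilyHandle*> handles;
        rocksdb::DB* db = nullptr;
        rocksdb::Status s = rocksdb::DB::Open(db_opts, path, cfs, &handles, &db);
        return s.ok() ? db : nullptr;
    }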
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
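The table_properties_collectors line in each dump corresponds to RocksDB's CompactOnDeletionCollector, which marks an SST file for compaction once a sliding window of its entries accumulates enough tombstones. A sketch constructing the factory with exactly the logged parameters:

    #include <rocksdb/utilities/table_properties_collectors.h>

    // Matches the logged collector: window 32768, trigger 16384, ratio 0.
    auto MakeDeletionCollectorFactory() {
        return rocksdb::NewCompactOnDeletionCollectorFactory(
            32768,  // sliding_window_size
            16384,  // deletion_trigger (tombstones within the window)
            0.0);   // deletion_ratio (0 = ratio check disabled)
    }

The resulting factory would be appended to ColumnFamilyOptions::table_properties_collector_factories for each column family.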
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564554867350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5645548669b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5645548669b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:           Options.merge_operator: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564555641ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5645548669b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.write_buffer_size: 16777216
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.max_write_buffer_number: 64
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.compression: LZ4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.num_levels: 7
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 522acc89-4868-4c50-bf5c-0e242f77f17a
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444459038451, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444459045928, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444459, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "522acc89-4868-4c50-bf5c-0e242f77f17a", "db_session_id": "KVITPUQN4H1K87KH51WO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444459196447, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444459, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "522acc89-4868-4c50-bf5c-0e242f77f17a", "db_session_id": "KVITPUQN4H1K87KH51WO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444459199259, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444459, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "522acc89-4868-4c50-bf5c-0e242f77f17a", "db_session_id": "KVITPUQN4H1K87KH51WO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444459200821, "job": 1, "event": "recovery_finished"}
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564555808000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: DB pointer 0x5645557e8000
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 11 09:14:19 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 11 09:14:19 compute-1 ceph-osd[77625]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.55 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.150       0      0       0.0       0.0
                                            Sum      1/0    1.55 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.150       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.150       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.150       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5645548669b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5645548669b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5645548669b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564554867350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 11 09:14:19 compute-1 ceph-osd[77625]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 11 09:14:19 compute-1 ceph-osd[77625]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 11 09:14:19 compute-1 ceph-osd[77625]: _get_class not permitted to load lua
Dec 11 09:14:19 compute-1 ceph-osd[77625]: _get_class not permitted to load sdk
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 load_pgs
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 load_pgs opened 0 pgs
Dec 11 09:14:19 compute-1 ceph-osd[77625]: osd.0 0 log_to_monitors true
Dec 11 09:14:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0[77621]: 2025-12-11T09:14:19.237+0000 7f77bbb78740 -1 osd.0 0 log_to_monitors true
Dec 11 09:14:19 compute-1 podman[78459]: 2025-12-11 09:14:19.328767836 +0000 UTC m=+0.071631075 container exec 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 11 09:14:19 compute-1 podman[78459]: 2025-12-11 09:14:19.439636402 +0000 UTC m=+0.182499621 container exec_died 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:14:19 compute-1 sudo[77962]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:20 compute-1 sudo[78512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:20 compute-1 sudo[78512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:20 compute-1 sudo[78512]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:20 compute-1 sudo[78537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060 -- inventory --format=json-pretty --filter-for-batch
Dec 11 09:14:20 compute-1 sudo[78537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:20 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 11 09:14:20 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0 done with init, starting boot process
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0 start_boot
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 11 09:14:20 compute-1 ceph-osd[77625]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 11 09:14:20 compute-1 podman[78603]: 2025-12-11 09:14:20.428717729 +0000 UTC m=+0.022106803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:20 compute-1 podman[78603]: 2025-12-11 09:14:20.593860604 +0000 UTC m=+0.187249638 container create ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_noyce, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:20 compute-1 systemd[1]: Started libpod-conmon-ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d.scope.
Dec 11 09:14:20 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:20 compute-1 podman[78603]: 2025-12-11 09:14:20.727387133 +0000 UTC m=+0.320776217 container init ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec 11 09:14:20 compute-1 podman[78603]: 2025-12-11 09:14:20.73704393 +0000 UTC m=+0.330432974 container start ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:14:20 compute-1 admiring_noyce[78620]: 167 167
Dec 11 09:14:20 compute-1 systemd[1]: libpod-ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d.scope: Deactivated successfully.
Dec 11 09:14:20 compute-1 podman[78603]: 2025-12-11 09:14:20.757946352 +0000 UTC m=+0.351335416 container attach ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_noyce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec 11 09:14:20 compute-1 podman[78603]: 2025-12-11 09:14:20.758587058 +0000 UTC m=+0.351976102 container died ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-1c15c88612a8ef4ef391eff72cbe860056d227cbfe8eebe321f5c06d7ec6026c-merged.mount: Deactivated successfully.
Dec 11 09:14:21 compute-1 podman[78603]: 2025-12-11 09:14:21.022327588 +0000 UTC m=+0.615716672 container remove ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_noyce, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Dec 11 09:14:21 compute-1 systemd[1]: libpod-conmon-ea2e560568e0eb619cfc2ee2142ddf62dc482a94ff5e9be406affa23d302d26d.scope: Deactivated successfully.
Dec 11 09:14:21 compute-1 podman[78646]: 2025-12-11 09:14:21.223911266 +0000 UTC m=+0.052348104 container create 66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:21 compute-1 systemd[1]: Started libpod-conmon-66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c.scope.
Dec 11 09:14:21 compute-1 podman[78646]: 2025-12-11 09:14:21.193642434 +0000 UTC m=+0.022079252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:21 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6e4bd4d56df1df05a3bf7d02f5a4e0ca8d0548e6b0798280f453ee2d5af677/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6e4bd4d56df1df05a3bf7d02f5a4e0ca8d0548e6b0798280f453ee2d5af677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6e4bd4d56df1df05a3bf7d02f5a4e0ca8d0548e6b0798280f453ee2d5af677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6e4bd4d56df1df05a3bf7d02f5a4e0ca8d0548e6b0798280f453ee2d5af677/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:21 compute-1 podman[78646]: 2025-12-11 09:14:21.450440865 +0000 UTC m=+0.278877773 container init 66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:21 compute-1 podman[78646]: 2025-12-11 09:14:21.4567939 +0000 UTC m=+0.285230758 container start 66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:14:21 compute-1 podman[78646]: 2025-12-11 09:14:21.508371203 +0000 UTC m=+0.336808021 container attach 66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_banzai, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:14:22 compute-1 loving_banzai[78662]: [
Dec 11 09:14:22 compute-1 loving_banzai[78662]:     {
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "available": false,
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "being_replaced": false,
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "ceph_device_lvm": false,
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "lsm_data": {},
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "lvs": [],
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "path": "/dev/sr0",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "rejected_reasons": [
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "Has a FileSystem",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "Insufficient space (<5GB)"
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         ],
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         "sys_api": {
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "actuators": null,
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "device_nodes": [
Dec 11 09:14:22 compute-1 loving_banzai[78662]:                 "sr0"
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             ],
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "devname": "sr0",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "human_readable_size": "482.00 KB",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "id_bus": "ata",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "model": "QEMU DVD-ROM",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "nr_requests": "2",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "parent": "/dev/sr0",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "partitions": {},
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "path": "/dev/sr0",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "removable": "1",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "rev": "2.5+",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "ro": "0",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "rotational": "1",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "sas_address": "",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "sas_device_handle": "",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "scheduler_mode": "mq-deadline",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "sectors": 0,
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "sectorsize": "2048",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "size": 493568.0,
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "support_discard": "2048",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "type": "disk",
Dec 11 09:14:22 compute-1 loving_banzai[78662]:             "vendor": "QEMU"
Dec 11 09:14:22 compute-1 loving_banzai[78662]:         }
Dec 11 09:14:22 compute-1 loving_banzai[78662]:     }
Dec 11 09:14:22 compute-1 loving_banzai[78662]: ]
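The JSON array emitted by loving_banzai has the shape of a ceph-volume inventory report, gathered here by cephadm in a throwaway ceph container: the only device listed, /dev/sr0 (the QEMU DVD-ROM, 482.00 KB), is marked "available": false and rejected as an OSD candidate because it carries a filesystem and is under 5 GB. A sketch of reproducing the report from the host, assuming cephadm is on PATH and accepts these flags as in current releases:

    # enumerate candidate devices the way the orchestrator does
    cephadm ceph-volume inventory --format json-pretty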
Dec 11 09:14:22 compute-1 systemd[1]: libpod-66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c.scope: Deactivated successfully.
Dec 11 09:14:22 compute-1 podman[78646]: 2025-12-11 09:14:22.251342331 +0000 UTC m=+1.079779149 container died 66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_banzai, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:14:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-8f6e4bd4d56df1df05a3bf7d02f5a4e0ca8d0548e6b0798280f453ee2d5af677-merged.mount: Deactivated successfully.
Dec 11 09:14:22 compute-1 podman[78646]: 2025-12-11 09:14:22.510688874 +0000 UTC m=+1.339125692 container remove 66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 11 09:14:22 compute-1 systemd[1]: libpod-conmon-66885bca142c5cf368d24e957cfa83b1185eb648c2242efe007f00ad9e373b6c.scope: Deactivated successfully.
Dec 11 09:14:22 compute-1 sudo[78537]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.727 iops: 6842.182 elapsed_sec: 0.438
Dec 11 09:14:26 compute-1 ceph-osd[77625]: log_channel(cluster) log [WRN] : OSD bench result of 6842.182499 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
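The warning above is the mClock scheduler's sanity check: the self-measured 6842 IOPS falls outside the plausible 50-500 IOPS window for the device class, so osd.0 keeps the default capacity of 315 IOPS. Following the log's own recommendation, a hedged sketch (device path and the final value are placeholders, and the hdd/ssd suffix depends on the OSD's device class):

    # benchmark 4k random writes on the OSD's backing device
    # (destructive: only run against a scratch device)
    fio --name=osdbench --filename=/dev/sdX --direct=1 --ioengine=libaio \
        --rw=randwrite --bs=4k --iodepth=64 --runtime=60 --time_based

    # pin the mClock capacity for this OSD to the measured IOPS
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 315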
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 0 waiting for initial osdmap
Dec 11 09:14:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0[77621]: 2025-12-11T09:14:26.546+0000 7f77b7afb640 -1 osd.0 0 waiting for initial osdmap
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 set_numa_affinity not setting numa affinity
Dec 11 09:14:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-osd-0[77621]: 2025-12-11T09:14:26.594+0000 7f77b3123640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
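The two set_numa_affinity messages (logged once by the daemon and once via the container unit's stderr) are non-fatal: with no public interface resolved, the OSD cannot map its network to a NUMA node and simply skips pinning. If the noise is unwanted, one option, an assumption rather than something the log shows was done, is to turn off automatic NUMA detection:

    # stop OSDs from attempting NUMA auto-affinity
    ceph config set osd osd_numa_auto_affinity false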
Dec 11 09:14:26 compute-1 ceph-osd[77625]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
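The _collect_metadata line shows osd.0 is backed by loop3, a loop device, which has no model, serial, or /dev/disk/by-path symlink to record; that is expected for loop-backed OSDs, common in test deployments. Two read-only cross-checks, as a sketch:

    # show loop devices and their backing files
    losetup -l
    # compare with what the cluster recorded for osd.0
    ceph osd metadata 0 | grep -E 'devices|device_ids'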
Dec 11 09:14:27 compute-1 ceph-osd[77625]: osd.0 8 state: booting -> active
Dec 11 09:14:29 compute-1 ceph-osd[77625]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 11 09:14:29 compute-1 ceph-osd[77625]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 11 09:14:29 compute-1 ceph-osd[77625]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 11 09:14:29 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 10 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 11 pg[1.0( empty local-lis/les=10/11 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=10) [0] r=0 lpr=10 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:46 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 21 pg[7.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:47 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 22 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:53 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 26 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26 pruub=10.243951797s) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active pruub 44.247268677s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:14:53 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 26 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26 pruub=10.243951797s) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown pruub 44.247268677s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1f( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1d( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1e( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1b( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.a( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.8( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1c( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.9( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.7( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.6( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.2( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.4( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.5( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.3( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.c( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.b( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.d( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.e( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.10( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.11( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.12( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.13( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.14( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.15( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.16( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.17( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.18( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.19( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1a( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.f( empty local-lis/les=13/14 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.8( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.9( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.7( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.6( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.2( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.4( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.0( empty local-lis/les=26/27 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.3( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.10( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.11( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.13( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.14( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.15( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.1a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.19( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.17( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:14:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 27 pg[2.16( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=13/13 les/c/f=14/14/0 sis=26) [0] r=0 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
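The burst above, start_peering_interval followed by "transitioning to Primary" and AllReplicasActivated for pgs 2.1 through 2.1f at epochs 26-27, is consistent with pool 2's pg_num being raised from 1 to 32 at epoch 26: the new PGs all show ec=26/13 (created at 26, split from a pool created at 13) while 2.0 keeps ec=13/13, and with a single OSD in the acting set ([0], r=0) every PG peers and activates immediately. A couple of read-only checks, with the pool name as a placeholder:

    # confirm the pool's current pg_num
    ceph osd pool get <pool-name> pg_num
    # one line per PG: state and acting set
    ceph pg dump pgs_brief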
Dec 11 09:14:54 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 11 09:14:54 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 11 09:14:55 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 11 09:14:55 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 11 09:14:56 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 11 09:14:56 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 11 09:14:57 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 11 09:14:57 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 11 09:14:58 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 11 09:14:58 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 11 09:14:58 compute-1 sudo[79694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:14:58 compute-1 sudo[79694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:14:58 compute-1 sudo[79694]: pam_unix(sudo:session): session closed for user root
Dec 11 09:14:58 compute-1 sudo[79719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:14:58 compute-1 sudo[79719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
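The sudo pair above is cephadm's usual push model: the orchestrator ships a content-addressed copy of itself under /var/lib/ceph/<fsid>/ and the ceph-admin user runs it as root, here with `_orch deploy` for fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060; the short-lived containers and systemd reloads that follow are that deployment laying down the mon service. A hypothetical follow-up, not part of the log, to see the result on this host:

    # list cephadm-managed daemons deployed here
    cephadm ls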
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.370008001 +0000 UTC m=+0.040466294 container create 15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_blackburn, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:59 compute-1 systemd[1]: Started libpod-conmon-15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8.scope.
Dec 11 09:14:59 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.442624848 +0000 UTC m=+0.113083161 container init 15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_blackburn, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.349363878 +0000 UTC m=+0.019822191 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.45039043 +0000 UTC m=+0.120848723 container start 15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:14:59 compute-1 dazzling_blackburn[79802]: 167 167
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.456713633 +0000 UTC m=+0.127171926 container attach 15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:14:59 compute-1 systemd[1]: libpod-15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8.scope: Deactivated successfully.
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.458127081 +0000 UTC m=+0.128585534 container died 15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 11 09:14:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-ff0161f2a067ed5811f716dd00e9856f3cd180aad4175e9226f96f3c1c738c05-merged.mount: Deactivated successfully.
Dec 11 09:14:59 compute-1 podman[79786]: 2025-12-11 09:14:59.498932333 +0000 UTC m=+0.169390636 container remove 15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_blackburn, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 11 09:14:59 compute-1 systemd[1]: libpod-conmon-15cb7ff31096ac1d912a771c1bc80440b6ad967ac7dcaa9fb0c9f2f743e06fa8.scope: Deactivated successfully.
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.56049229 +0000 UTC m=+0.041022239 container create e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 11 09:14:59 compute-1 systemd[1]: Started libpod-conmon-e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3.scope.
Dec 11 09:14:59 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:14:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50895543c9cd98dfd97d6ed67704911e63a6cb8866a1a64dfe9ce2ed337eacd/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50895543c9cd98dfd97d6ed67704911e63a6cb8866a1a64dfe9ce2ed337eacd/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50895543c9cd98dfd97d6ed67704911e63a6cb8866a1a64dfe9ce2ed337eacd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a50895543c9cd98dfd97d6ed67704911e63a6cb8866a1a64dfe9ce2ed337eacd/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.637262671 +0000 UTC m=+0.117792660 container init e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_meninsky, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.543742313 +0000 UTC m=+0.024272282 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.643063739 +0000 UTC m=+0.123593698 container start e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.646480332 +0000 UTC m=+0.127010301 container attach e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_meninsky, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:14:59 compute-1 systemd[1]: libpod-e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3.scope: Deactivated successfully.
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.729618547 +0000 UTC m=+0.210148496 container died e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 11 09:14:59 compute-1 systemd[1]: var-lib-containers-storage-overlay-a50895543c9cd98dfd97d6ed67704911e63a6cb8866a1a64dfe9ce2ed337eacd-merged.mount: Deactivated successfully.
Dec 11 09:14:59 compute-1 podman[79818]: 2025-12-11 09:14:59.759624994 +0000 UTC m=+0.240154963 container remove e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 11 09:14:59 compute-1 systemd[1]: libpod-conmon-e50ccb2e759b54960883464eb8d39bc2e676c4bbdc53544d0600629a741b17d3.scope: Deactivated successfully.
Dec 11 09:14:59 compute-1 systemd[1]: Reloading.
Dec 11 09:14:59 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 11 09:14:59 compute-1 systemd-sysv-generator[79899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:14:59 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 11 09:14:59 compute-1 systemd-rc-local-generator[79894]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:15:00 compute-1 systemd[1]: Reloading.
Dec 11 09:15:00 compute-1 systemd-rc-local-generator[79939]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:15:00 compute-1 systemd-sysv-generator[79942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
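Each "Reloading" is systemd re-reading units after cephadm installs the new mon service; the generator warnings interleaved with it are pre-existing host conditions, not Ceph issues: the legacy SysV network initscript lacks a native unit, and /etc/rc.d/rc.local is skipped because it is not executable. If rc.local is actually meant to run, the fix is the usual one:

    # make rc.local eligible for rc-local.service again
    chmod +x /etc/rc.d/rc.local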
Dec 11 09:15:00 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:15:00 compute-1 podman[79998]: 2025-12-11 09:15:00.53223369 +0000 UTC m=+0.038777026 container create 3a594a069e6ae25a3a05b53e84fae8dcf36fc234f19ca5c1fdf147ae2ca52592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bccbaa56b7da6a2675899468d166dc59f5a0fc5b0797a6b41033dd3d2616c0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bccbaa56b7da6a2675899468d166dc59f5a0fc5b0797a6b41033dd3d2616c0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bccbaa56b7da6a2675899468d166dc59f5a0fc5b0797a6b41033dd3d2616c0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:00 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bccbaa56b7da6a2675899468d166dc59f5a0fc5b0797a6b41033dd3d2616c0e/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:00 compute-1 podman[79998]: 2025-12-11 09:15:00.586791327 +0000 UTC m=+0.093334673 container init 3a594a069e6ae25a3a05b53e84fae8dcf36fc234f19ca5c1fdf147ae2ca52592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec 11 09:15:00 compute-1 podman[79998]: 2025-12-11 09:15:00.594628411 +0000 UTC m=+0.101171747 container start 3a594a069e6ae25a3a05b53e84fae8dcf36fc234f19ca5c1fdf147ae2ca52592 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mon-compute-1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:15:00 compute-1 bash[79998]: 3a594a069e6ae25a3a05b53e84fae8dcf36fc234f19ca5c1fdf147ae2ca52592
Dec 11 09:15:00 compute-1 podman[79998]: 2025-12-11 09:15:00.516900733 +0000 UTC m=+0.023444069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:00 compute-1 systemd[1]: Started Ceph mon.compute-1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:15:00 compute-1 ceph-mon[80018]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:15:00 compute-1 ceph-mon[80018]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:00 compute-1 ceph-mon[80018]: load: jerasure load: lrc 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: RocksDB version: 7.9.2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Git sha 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Compile date 2025-07-17 03:12:14
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: DB SUMMARY
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: DB Session ID:  AQJDRPP5WSRURMC1H049
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: CURRENT file:  CURRENT
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: IDENTITY file:  IDENTITY
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                         Options.error_if_exists: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.create_if_missing: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                         Options.paranoid_checks: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                                     Options.env: 0x55624f5abc20
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                                Options.info_log: 0x5562513a7a20
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.max_file_opening_threads: 16
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                              Options.statistics: (nil)
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                               Options.use_fsync: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.max_log_file_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                         Options.allow_fallocate: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                        Options.use_direct_reads: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.create_missing_column_families: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                              Options.db_log_dir: 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                                 Options.wal_dir: 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.advise_random_on_open: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                    Options.write_buffer_manager: 0x5562513ab900
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                            Options.rate_limiter: (nil)
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.unordered_write: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                               Options.row_cache: None
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                              Options.wal_filter: None
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.allow_ingest_behind: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.two_write_queues: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.manual_wal_flush: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.wal_compression: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.atomic_flush: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.log_readahead_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.allow_data_in_errors: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.db_host_id: __hostname__
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.max_background_jobs: 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.max_background_compactions: -1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.max_subcompactions: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.max_total_wal_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                          Options.max_open_files: -1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                          Options.bytes_per_sync: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:       Options.compaction_readahead_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.max_background_flushes: -1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Compression algorithms supported:
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kZSTD supported: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kXpressCompression supported: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kBZip2Compression supported: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kLZ4Compression supported: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kZlibCompression supported: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kLZ4HCCompression supported: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         kSnappyCompression supported: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:           Options.merge_operator: 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:        Options.compaction_filter: None
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:        Options.compaction_filter_factory: None
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:  Options.sst_partitioner_factory: None
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5562513a65c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5562513cb350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:        Options.write_buffer_size: 33554432
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:  Options.max_write_buffer_number: 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.compression: NoCompression
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:       Options.prefix_extractor: nullptr
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.num_levels: 7
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.compression_opts.level: 32767
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:               Options.compression_opts.strategy: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                  Options.compression_opts.enabled: false
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                        Options.arena_block_size: 1048576
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.disable_auto_compactions: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.inplace_update_support: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                           Options.bloom_locality: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                    Options.max_successive_merges: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.paranoid_file_checks: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.force_consistency_checks: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.report_bg_io_stats: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                               Options.ttl: 2592000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                       Options.enable_blob_files: false
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                           Options.min_blob_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                          Options.blob_file_size: 268435456
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb:                Options.blob_file_starting_level: 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e39b1f64-5981-400c-b4cc-531ee396f1c6
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444500646489, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444500648462, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444500, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444500648616, "job": 1, "event": "recovery_finished"}
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 11 09:15:00 compute-1 sudo[79719]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5562513cce00
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: DB pointer 0x5562514d6000
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 11 09:15:00 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5562513cb350#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 6.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 11 09:15:00 compute-1 ceph-mon[80018]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Dec 11 09:15:00 compute-1 ceph-mon[80018]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(???) e0 preinit fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).mds e1 new map
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-12-11T09:12:26.191373+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 3314933000852226048, adjusting msgr requires
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).osd e31 crush map has features 288514051259236352, adjusting msgr requires
Dec 11 09:15:00 compute-1 ceph-mon[80018]: pgmap v73: 7 pgs: 1 creating+peering, 2 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/777583152' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e23: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4123948204' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: pgmap v75: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4123948204' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e24: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e25: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1917506725' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1917506725' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e26: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/235673189' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/235673189' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e27: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2933185597' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2933185597' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e28: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: pgmap v81: 38 pgs: 31 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 2.1f scrub starts
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 2.1f scrub ok
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2924337962' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2924337962' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e29: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 2.1d scrub starts
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 2.1d scrub ok
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:00 compute-1 ceph-mon[80018]: pgmap v84: 100 pgs: 93 unknown, 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Deploying daemon mon.compute-2 on compute-2
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 2.1e scrub starts
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 2.1e scrub ok
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 4.1e scrub starts
Dec 11 09:15:00 compute-1 ceph-mon[80018]: 4.1e scrub ok
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:15:00 compute-1 ceph-mon[80018]: osdmap e30: 2 total, 2 up, 2 in
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Dec 11 09:15:00 compute-1 ceph-mon[80018]: Cluster is now healthy
Dec 11 09:15:00 compute-1 ceph-mon[80018]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Dec 11 09:15:00 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 11 09:15:00 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 11 09:15:01 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 11 09:15:01 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 11 09:15:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1 deep-scrub starts
Dec 11 09:15:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1 deep-scrub ok
Dec 11 09:15:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 11 09:15:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.19( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093620300s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.894084930s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.19( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093585968s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.894084930s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.15( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093544006s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.894084930s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.13( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093458176s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.894016266s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.10( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093339920s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893917084s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.15( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093508720s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.894084930s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.10( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093323708s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893917084s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.13( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.093426704s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.894016266s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092829704s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893878937s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092854500s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893901825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092802048s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893878937s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092814445s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893901825s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=15.788428307s) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active pruub 60.589649200s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092372894s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893646240s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092357635s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893646240s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.4( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092226982s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893657684s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.4( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092214584s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893657684s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.6( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092039108s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893589020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.6( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092020988s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893589020s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092197418s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893787384s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.092175484s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893787384s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.9( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.091923714s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893569946s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.091911316s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893608093s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.9( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.091896057s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893569946s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.a( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.091875076s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893608093s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.091632843s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.893497467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.091615677s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.893497467s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.087713242s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.889629364s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1e( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.087691307s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.889629364s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.087525368s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 active pruub 58.889663696s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[2.1f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=32 pruub=14.087457657s) [1] r=-1 lpr=32 pi=[26,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 58.889663696s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[7.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=32 pruub=15.788428307s) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown pruub 60.589649200s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.18( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.18( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.1d( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.1a( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.1b( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.1c( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.1b( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.1a( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.1a( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.1c( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.9( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.e( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.2( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.e( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.3( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.5( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.4( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.7( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.1( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.5( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.1( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.d( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.a( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.f( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.c( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.a( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.9( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.e( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.d( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.c( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.9( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.8( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.f( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.16( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.10( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.11( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.15( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.13( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.15( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.14( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.13( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.15( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[3.16( empty local-lis/les=0/0 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.11( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[4.1f( empty local-lis/les=0/0 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.10( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[5.1f( empty local-lis/les=0/0 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=0/0 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 11 09:15:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 11 09:15:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 11 09:15:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 11 09:15:06 compute-1 ceph-mon[80018]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Dec 11 09:15:06 compute-1 ceph-mon[80018]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Dec 11 09:15:06 compute-1 ceph-mon[80018]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 11 09:15:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 11 09:15:06 compute-1 ceph-mon[80018]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 11 09:15:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 11 09:15:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 11 09:15:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Dec 11 09:15:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Dec 11 09:15:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 11 09:15:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e32 e32: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-1 ceph-mon[80018]: Deploying daemon mon.compute-1 on compute-1
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.9 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.9 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-0 calling monitor election
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 4.1f scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 4.1f scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.7 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.7 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.17 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.17 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: pgmap v88: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.2 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.2 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-2 calling monitor election
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 4.11 deep-scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 4.11 deep-scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.6 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.6 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.16 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.16 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: pgmap v89: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.1 deep-scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.1 deep-scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.18 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.18 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec 11 09:15:09 compute-1 ceph-mon[80018]: monmap epoch 2
Dec 11 09:15:09 compute-1 ceph-mon[80018]: fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:09 compute-1 ceph-mon[80018]: last_changed 2025-12-11T09:14:58.875471+0000
Dec 11 09:15:09 compute-1 ceph-mon[80018]: created 2025-12-11T09:12:23.814502+0000
Dec 11 09:15:09 compute-1 ceph-mon[80018]: min_mon_release 19 (squid)
Dec 11 09:15:09 compute-1 ceph-mon[80018]: election_strategy: 1
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 11 09:15:09 compute-1 ceph-mon[80018]: fsmap 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: osdmap e31: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mgrmap e9: compute-0.wwpcae(active, since 2m)
Dec 11 09:15:09 compute-1 ceph-mon[80018]: overall HEALTH_OK
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e33 e33: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1f( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1c( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1d( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.12( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.13( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.10( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.11( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.16( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.14( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.15( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.a( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.b( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.8( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.17( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.9( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.e( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.6( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.5( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.4( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.7( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.3( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.2( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.d( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.c( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1e( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.19( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.f( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.18( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1b( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1a( empty local-lis/les=21/22 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.1f( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1c( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1d( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.11( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.10( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.13( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.12( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.10( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.15( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.11( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.16( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.16( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.11( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.14( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.e( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.15( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.a( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.15( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.9( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.b( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.8( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.4( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.17( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.e( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.7( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.2( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.6( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.5( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.0( empty local-lis/les=32/33 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.4( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.7( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.9( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.1( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.f( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.2( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.9( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.3( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.e( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.d( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.c( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.c( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.1a( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.1b( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1e( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.1d( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.1c( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.1a( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.18( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1b( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[5.18( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.1a( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=30/30 les/c/f=31/31/0 sis=32) [0] r=0 lpr=32 pi=[30,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.19( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=28/28 les/c/f=29/29/0 sis=32) [0] r=0 lpr=32 pi=[28,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 33 pg[7.f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=21/21 les/c/f=22/22/0 sis=32) [0] r=0 lpr=32 pi=[21,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.1f deep-scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.1f deep-scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-0 calling monitor election
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-2 calling monitor election
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.18 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.18 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 5.19 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 5.19 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.17 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.17 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.1e scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 3.1e scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: pgmap v92: 193 pgs: 15 peering, 31 unknown, 147 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-1 calling monitor election
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.1a scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.1a scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 4.19 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 4.19 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.14 scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.14 scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 6.1b scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 6.1b scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: pgmap v93: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.12 deep-scrub starts
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2.12 deep-scrub ok
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec 11 09:15:09 compute-1 ceph-mon[80018]: monmap epoch 3
Dec 11 09:15:09 compute-1 ceph-mon[80018]: fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:09 compute-1 ceph-mon[80018]: last_changed 2025-12-11T09:15:04.733539+0000
Dec 11 09:15:09 compute-1 ceph-mon[80018]: created 2025-12-11T09:12:23.814502+0000
Dec 11 09:15:09 compute-1 ceph-mon[80018]: min_mon_release 19 (squid)
Dec 11 09:15:09 compute-1 ceph-mon[80018]: election_strategy: 1
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec 11 09:15:09 compute-1 ceph-mon[80018]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec 11 09:15:09 compute-1 ceph-mon[80018]: fsmap 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: osdmap e32: 2 total, 2 up, 2 in
Dec 11 09:15:09 compute-1 ceph-mon[80018]: mgrmap e9: compute-0.wwpcae(active, since 2m)
Dec 11 09:15:09 compute-1 ceph-mon[80018]: overall HEALTH_OK
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:09 compute-1 sudo[80058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:09 compute-1 sudo[80058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:09 compute-1 sudo[80058]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:10 compute-1 sudo[80083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:10 compute-1 sudo[80083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:10 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 handle_auth_request failed to assign global_id
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.397486474 +0000 UTC m=+0.046486598 container create b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:10 compute-1 systemd[1]: Started libpod-conmon-b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9.scope.
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.375066513 +0000 UTC m=+0.024066657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:10 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.485439959 +0000 UTC m=+0.134440143 container init b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.492665116 +0000 UTC m=+0.141665210 container start b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_heyrovsky, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.496381247 +0000 UTC m=+0.145381421 container attach b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:15:10 compute-1 vigilant_heyrovsky[80164]: 167 167
Dec 11 09:15:10 compute-1 systemd[1]: libpod-b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9.scope: Deactivated successfully.
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.500602713 +0000 UTC m=+0.149602807 container died b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_heyrovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 11 09:15:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-e37b9af3aaa40b3e58d58900aa57455dc2f89e2c2d9dec5740c2ed728e480317-merged.mount: Deactivated successfully.
Dec 11 09:15:10 compute-1 podman[80147]: 2025-12-11 09:15:10.536554572 +0000 UTC m=+0.185554656 container remove b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_heyrovsky, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:10 compute-1 systemd[1]: libpod-conmon-b8a1e377dd119aa481e672d6da031e735597ddeb731f75f663d0a915f71c94d9.scope: Deactivated successfully.
Dec 11 09:15:10 compute-1 systemd[1]: Reloading.
Dec 11 09:15:10 compute-1 systemd-rc-local-generator[80207]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:15:10 compute-1 systemd-sysv-generator[80210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:15:10 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1019927280 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 11 09:15:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 11 09:15:10 compute-1 systemd[1]: Reloading.
Dec 11 09:15:10 compute-1 ceph-mon[80018]: 6.18 deep-scrub starts
Dec 11 09:15:10 compute-1 ceph-mon[80018]: 6.18 deep-scrub ok
Dec 11 09:15:10 compute-1 ceph-mon[80018]: 2.11 scrub starts
Dec 11 09:15:10 compute-1 ceph-mon[80018]: 2.11 scrub ok
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:15:10 compute-1 ceph-mon[80018]: osdmap e33: 2 total, 2 up, 2 in
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.unesvp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:10 compute-1 ceph-mon[80018]: Deploying daemon mgr.compute-1.unesvp on compute-1
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1693698922' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1693698922' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 11 09:15:10 compute-1 ceph-mon[80018]: pgmap v95: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:10 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:10 compute-1 systemd-rc-local-generator[80248]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:15:10 compute-1 systemd-sysv-generator[80252]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:15:11 compute-1 systemd[1]: Starting Ceph mgr.compute-1.unesvp for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:15:11 compute-1 podman[80306]: 2025-12-11 09:15:11.317024173 +0000 UTC m=+0.038341935 container create a882a922b14672b8eb0aa7229d1f2105772139169c180689f9304ac5dbc967ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:15:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c97322aebc5a9714d2bc4f4fcf24b9caf55fae09e07932e26c70d5d5242ccb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c97322aebc5a9714d2bc4f4fcf24b9caf55fae09e07932e26c70d5d5242ccb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c97322aebc5a9714d2bc4f4fcf24b9caf55fae09e07932e26c70d5d5242ccb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:11 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c97322aebc5a9714d2bc4f4fcf24b9caf55fae09e07932e26c70d5d5242ccb/merged/var/lib/ceph/mgr/ceph-compute-1.unesvp supports timestamps until 2038 (0x7fffffff)
Dec 11 09:15:11 compute-1 podman[80306]: 2025-12-11 09:15:11.37347969 +0000 UTC m=+0.094797472 container init a882a922b14672b8eb0aa7229d1f2105772139169c180689f9304ac5dbc967ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:15:11 compute-1 podman[80306]: 2025-12-11 09:15:11.377825339 +0000 UTC m=+0.099143101 container start a882a922b14672b8eb0aa7229d1f2105772139169c180689f9304ac5dbc967ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:15:11 compute-1 bash[80306]: a882a922b14672b8eb0aa7229d1f2105772139169c180689f9304ac5dbc967ad
Dec 11 09:15:11 compute-1 podman[80306]: 2025-12-11 09:15:11.299860295 +0000 UTC m=+0.021178097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:15:11 compute-1 systemd[1]: Started Ceph mgr.compute-1.unesvp for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:11 compute-1 sudo[80083]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:11.561+0000 7fefaad4e140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:11 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:11.642+0000 7fefaad4e140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:11 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 11 09:15:11 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 11 09:15:12 compute-1 ceph-mon[80018]: 3.1b scrub starts
Dec 11 09:15:12 compute-1 ceph-mon[80018]: 3.1b scrub ok
Dec 11 09:15:12 compute-1 ceph-mon[80018]: 2.f scrub starts
Dec 11 09:15:12 compute-1 ceph-mon[80018]: 2.f scrub ok
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1088465051' entity='client.admin' 
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 11 09:15:12 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:12 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'crash'
Dec 11 09:15:12 compute-1 ceph-mgr[80326]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:12.480+0000 7fefaad4e140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:12 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:12 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 11 09:15:12 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:13.157+0000 7fefaad4e140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 4.1c scrub starts
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 4.1c scrub ok
Dec 11 09:15:13 compute-1 ceph-mon[80018]: Deploying daemon crash.compute-2 on compute-2
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 2.16 scrub starts
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 2.16 scrub ok
Dec 11 09:15:13 compute-1 ceph-mon[80018]: pgmap v96: 193 pgs: 78 peering, 31 unknown, 84 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 5.1d scrub starts
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 5.1d scrub ok
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 2.3 scrub starts
Dec 11 09:15:13 compute-1 ceph-mon[80018]: 2.3 scrub ok
Dec 11 09:15:13 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]:   from numpy import show_config as show_numpy_config
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'influx'
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:13.333+0000 7fefaad4e140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'insights'
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:13.416+0000 7fefaad4e140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:13.572+0000 7fefaad4e140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:13 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:13 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 11 09:15:13 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:14 compute-1 ceph-mon[80018]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:14 compute-1 ceph-mon[80018]: Saving service ingress.rgw.default spec with placement count:2
Dec 11 09:15:14 compute-1 ceph-mon[80018]: 6.1f scrub starts
Dec 11 09:15:14 compute-1 ceph-mon[80018]: 6.1f scrub ok
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:15:14 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:14 compute-1 ceph-mon[80018]: 2.5 scrub starts
Dec 11 09:15:14 compute-1 ceph-mon[80018]: 2.5 scrub ok
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:14.683+0000 7fefaad4e140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:14 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Dec 11 09:15:14 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:14.909+0000 7fefaad4e140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:14 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:14.988+0000 7fefaad4e140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:15.055+0000 7fefaad4e140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'progress'
Dec 11 09:15:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:15.137+0000 7fefaad4e140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:15.219+0000 7fefaad4e140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mon[80018]: pgmap v97: 193 pgs: 15 peering, 178 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:15 compute-1 ceph-mon[80018]: 4.1d scrub starts
Dec 11 09:15:15 compute-1 ceph-mon[80018]: 4.1d scrub ok
Dec 11 09:15:15 compute-1 ceph-mon[80018]: 2.1c deep-scrub starts
Dec 11 09:15:15 compute-1 ceph-mon[80018]: 2.1c deep-scrub ok
Dec 11 09:15:15 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:15.578+0000 7fefaad4e140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e33 _set_new_cache_sizes cache_size:1020053089 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'restful'
Dec 11 09:15:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:15.679+0000 7fefaad4e140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:15 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 11 09:15:15 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 11 09:15:15 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e34 e34: 3 total, 2 up, 3 in
Dec 11 09:15:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:16 compute-1 ceph-mgr[80326]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:16 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rook'
Dec 11 09:15:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:16.263+0000 7fefaad4e140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='client.14250 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:16 compute-1 ceph-mon[80018]: Saving service node-exporter spec with placement *
Dec 11 09:15:16 compute-1 ceph-mon[80018]: Saving service grafana spec with placement compute-0;count:1
Dec 11 09:15:16 compute-1 ceph-mon[80018]: Saving service prometheus spec with placement compute-0;count:1
Dec 11 09:15:16 compute-1 ceph-mon[80018]: Saving service alertmanager spec with placement compute-0;count:1
Dec 11 09:15:16 compute-1 ceph-mon[80018]: 6.c scrub starts
Dec 11 09:15:16 compute-1 ceph-mon[80018]: 6.c scrub ok
Dec 11 09:15:16 compute-1 ceph-mon[80018]: 2.b scrub starts
Dec 11 09:15:16 compute-1 ceph-mon[80018]: 2.b scrub ok
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/1606460407' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]: dispatch
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]: dispatch
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "dff00437-d089-48b8-a12a-b56f6f1647c7"}]': finished
Dec 11 09:15:16 compute-1 ceph-mon[80018]: osdmap e34: 3 total, 2 up, 3 in
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/3760831286' entity='client.admin' 
Dec 11 09:15:16 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/1631062346' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec 11 09:15:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.1e deep-scrub starts
Dec 11 09:15:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.1e deep-scrub ok
Dec 11 09:15:16 compute-1 ceph-mgr[80326]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:16 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:16.869+0000 7fefaad4e140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:16 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e35 e35: 3 total, 2 up, 3 in
Dec 11 09:15:16 compute-1 ceph-mgr[80326]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:16 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:16.964+0000 7fefaad4e140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'stats'
Dec 11 09:15:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:17.058+0000 7fefaad4e140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'status'
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:17.216+0000 7fefaad4e140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:17.290+0000 7fefaad4e140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:17.451+0000 7fefaad4e140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:17 compute-1 ceph-mon[80018]: pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:17 compute-1 ceph-mon[80018]: 3.8 scrub starts
Dec 11 09:15:17 compute-1 ceph-mon[80018]: 3.8 scrub ok
Dec 11 09:15:17 compute-1 ceph-mon[80018]: 6.1e deep-scrub starts
Dec 11 09:15:17 compute-1 ceph-mon[80018]: 6.1e deep-scrub ok
Dec 11 09:15:17 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:17 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:15:17 compute-1 ceph-mon[80018]: osdmap e35: 3 total, 2 up, 3 in
Dec 11 09:15:17 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:17 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1303046427' entity='client.admin' 
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:17.702+0000 7fefaad4e140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:17 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 11 09:15:17 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 11 09:15:18 compute-1 ceph-mgr[80326]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:18 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:18.017+0000 7fefaad4e140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:18 compute-1 ceph-mgr[80326]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:18.092+0000 7fefaad4e140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:18 compute-1 ceph-mgr[80326]: ms_deliver_dispatch: unhandled message 0x559ddfa00d00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.13( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549578667s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.621475220s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.13( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549548149s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.621475220s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.10( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549601555s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.621582031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.10( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549555779s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.621582031s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.14( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549561501s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.621856689s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.14( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549548149s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.621856689s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.a( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549311638s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.621887207s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.1d( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.548810005s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.621391296s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.a( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549289703s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.621887207s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.1d( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.548791885s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.621391296s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.8( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549286842s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.621971130s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.8( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549220085s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.621971130s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.b( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549194336s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.622032166s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.b( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.549169540s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.622032166s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.9( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.554039955s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.626930237s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.9( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.554010391s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.626930237s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.6( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553853989s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.626831055s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.6( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553825378s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.626831055s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.e( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553581238s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.626655579s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.e( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553565979s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.626655579s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.4( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553749084s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.626876831s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.4( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553730011s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.626876831s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.3( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553932190s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.627143860s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.3( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553918839s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.627143860s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.2( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553748131s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.627067566s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.2( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553732872s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.627067566s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553956985s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.627426147s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553944588s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.627426147s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.1e( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553792953s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.627296448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.1e( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553776741s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.627296448s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.18( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553723335s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.627494812s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.18( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553700447s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.627494812s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.1b( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553718567s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 active pruub 74.627540588s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 35 pg[7.1b( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=35 pruub=15.553697586s) [1] r=-1 lpr=35 pi=[32,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.627540588s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:18 compute-1 ceph-mon[80018]: 4.f scrub starts
Dec 11 09:15:18 compute-1 ceph-mon[80018]: 4.f scrub ok
Dec 11 09:15:18 compute-1 ceph-mon[80018]: mgrmap e10: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn
Dec 11 09:15:18 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:18 compute-1 ceph-mon[80018]: 5.1f scrub starts
Dec 11 09:15:18 compute-1 ceph-mon[80018]: 5.1f scrub ok
Dec 11 09:15:18 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:18 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2048146887' entity='client.admin' 
Dec 11 09:15:18 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e36 e36: 3 total, 2 up, 3 in
Dec 11 09:15:18 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec 11 09:15:18 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec 11 09:15:19 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 11 09:15:19 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 11 09:15:20 compute-1 ceph-mon[80018]: pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:20 compute-1 ceph-mon[80018]: 4.3 scrub starts
Dec 11 09:15:20 compute-1 ceph-mon[80018]: 4.3 scrub ok
Dec 11 09:15:20 compute-1 ceph-mon[80018]: osdmap e36: 3 total, 2 up, 3 in
Dec 11 09:15:20 compute-1 ceph-mon[80018]: mgrmap e11: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:20 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:20 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:20 compute-1 ceph-mon[80018]: 6.1c scrub starts
Dec 11 09:15:20 compute-1 ceph-mon[80018]: 6.1c scrub ok
Dec 11 09:15:20 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:20 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:20 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054708 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:20 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec 11 09:15:20 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec 11 09:15:21 compute-1 sudo[80381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhajlbcjutnbexsvqugtykgwitfmytxf ; /usr/bin/python3'
Dec 11 09:15:21 compute-1 sudo[80381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 3.4 scrub starts
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 3.4 scrub ok
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 5.11 scrub starts
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 5.11 scrub ok
Dec 11 09:15:21 compute-1 ceph-mon[80018]: pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 6.1 scrub starts
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 6.1 scrub ok
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 6.12 scrub starts
Dec 11 09:15:21 compute-1 ceph-mon[80018]: 6.12 scrub ok
Dec 11 09:15:21 compute-1 ceph-mon[80018]: from='client.? ' entity='client.admin' 
Dec 11 09:15:21 compute-1 python3[80383]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 11 09:15:21 compute-1 sudo[80381]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:21 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 11 09:15:21 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 11 09:15:22 compute-1 ceph-mon[80018]: 4.4 deep-scrub starts
Dec 11 09:15:22 compute-1 ceph-mon[80018]: 4.4 deep-scrub ok
Dec 11 09:15:22 compute-1 ceph-mon[80018]: 5.10 scrub starts
Dec 11 09:15:22 compute-1 ceph-mon[80018]: 5.10 scrub ok
Dec 11 09:15:22 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 11 09:15:22 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 11 09:15:23 compute-1 ceph-mon[80018]: pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:23 compute-1 ceph-mon[80018]: 5.5 scrub starts
Dec 11 09:15:23 compute-1 ceph-mon[80018]: 5.5 scrub ok
Dec 11 09:15:23 compute-1 ceph-mon[80018]: 3.14 scrub starts
Dec 11 09:15:23 compute-1 ceph-mon[80018]: 3.14 scrub ok
Dec 11 09:15:23 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2112496974' entity='client.admin' 
Dec 11 09:15:23 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 11 09:15:23 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:23 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.17 deep-scrub starts
Dec 11 09:15:23 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.17 deep-scrub ok
Dec 11 09:15:24 compute-1 ceph-mon[80018]: 6.6 scrub starts
Dec 11 09:15:24 compute-1 ceph-mon[80018]: 6.6 scrub ok
Dec 11 09:15:24 compute-1 ceph-mon[80018]: Deploying daemon osd.2 on compute-2
Dec 11 09:15:24 compute-1 ceph-mon[80018]: 6.17 deep-scrub starts
Dec 11 09:15:24 compute-1 ceph-mon[80018]: 6.17 deep-scrub ok
Dec 11 09:15:24 compute-1 ceph-mon[80018]: pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:24 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4034639147' entity='client.admin' 
Dec 11 09:15:24 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Dec 11 09:15:24 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Dec 11 09:15:25 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:25 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 11 09:15:25 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 11 09:15:25 compute-1 ceph-mon[80018]: 3.2 scrub starts
Dec 11 09:15:25 compute-1 ceph-mon[80018]: 3.2 scrub ok
Dec 11 09:15:25 compute-1 ceph-mon[80018]: 4.15 deep-scrub starts
Dec 11 09:15:25 compute-1 ceph-mon[80018]: 4.15 deep-scrub ok
Dec 11 09:15:25 compute-1 ceph-mon[80018]: from='mgr.14124 192.168.122.100:0/1266408288' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:25 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/3543000902' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:26 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 11 09:15:26 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 11 09:15:26 compute-1 ceph-mon[80018]: 4.6 scrub starts
Dec 11 09:15:26 compute-1 ceph-mon[80018]: 4.6 scrub ok
Dec 11 09:15:26 compute-1 ceph-mon[80018]: 5.15 scrub starts
Dec 11 09:15:26 compute-1 ceph-mon[80018]: 5.15 scrub ok
Dec 11 09:15:26 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/3543000902' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 11 09:15:26 compute-1 ceph-mon[80018]: mgrmap e12: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:26 compute-1 ceph-mon[80018]: pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:26 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1987057205' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  1: '-n'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  2: 'mgr.compute-1.unesvp'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  3: '-f'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  4: '--setuser'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  5: 'ceph'
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr respawn  6: '--setgroup'
Dec 11 09:15:27 compute-1 sshd-session[72550]: Connection closed by 192.168.122.100 port 34702
Dec 11 09:15:27 compute-1 sshd-session[72549]: Connection closed by 192.168.122.100 port 34690
Dec 11 09:15:27 compute-1 sshd-session[72782]: Connection closed by 192.168.122.100 port 34780
Dec 11 09:15:27 compute-1 sshd-session[72753]: Connection closed by 192.168.122.100 port 34770
Dec 11 09:15:27 compute-1 sshd-session[72695]: Connection closed by 192.168.122.100 port 34744
Dec 11 09:15:27 compute-1 sshd-session[72809]: Connection closed by 192.168.122.100 port 34784
Dec 11 09:15:27 compute-1 sshd-session[72666]: Connection closed by 192.168.122.100 port 34742
Dec 11 09:15:27 compute-1 sshd-session[72608]: Connection closed by 192.168.122.100 port 34724
Dec 11 09:15:27 compute-1 sshd-session[72838]: Connection closed by 192.168.122.100 port 34796
Dec 11 09:15:27 compute-1 sshd-session[72637]: Connection closed by 192.168.122.100 port 34726
Dec 11 09:15:27 compute-1 sshd-session[72724]: Connection closed by 192.168.122.100 port 34760
Dec 11 09:15:27 compute-1 sshd-session[72579]: Connection closed by 192.168.122.100 port 34710
Dec 11 09:15:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setuser ceph since I am not root
Dec 11 09:15:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setgroup ceph since I am not root
Dec 11 09:15:27 compute-1 sshd-session[72779]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72634]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72692]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72526]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72835]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72750]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72806]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72536]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 sshd-session[72663]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 sshd-session[72576]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-22.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 sshd-session[72605]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 sshd-session[72721]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:27 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:27 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:27 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Dec 11 09:15:27 compute-1 systemd[1]: session-32.scope: Consumed 57.224s CPU time.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 25 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 32 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 30 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 22 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 26 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 28 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 23 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 24 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 27 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 31 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 20 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Session 29 logged out. Waiting for processes to exit.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 30.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 25.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 22.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 29.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 31.
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 27.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 23.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 24.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 26.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 20.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 28.
Dec 11 09:15:27 compute-1 systemd-logind[791]: Removed session 32.
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:27.501+0000 7fd980980140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:27.580+0000 7fd980980140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:27 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 11 09:15:27 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 11 09:15:27 compute-1 ceph-mon[80018]: 3.1 scrub starts
Dec 11 09:15:27 compute-1 ceph-mon[80018]: 3.1 scrub ok
Dec 11 09:15:27 compute-1 ceph-mon[80018]: 3.13 scrub starts
Dec 11 09:15:27 compute-1 ceph-mon[80018]: 3.13 scrub ok
Dec 11 09:15:27 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/1987057205' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 11 09:15:27 compute-1 ceph-mon[80018]: mgrmap e13: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:27 compute-1 ceph-mon[80018]: 6.4 scrub starts
Dec 11 09:15:27 compute-1 ceph-mon[80018]: 6.4 scrub ok
Dec 11 09:15:28 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'crash'
Dec 11 09:15:28 compute-1 ceph-mgr[80326]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:28.447+0000 7fd980980140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:28 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:28 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.10 deep-scrub starts
Dec 11 09:15:28 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.10 deep-scrub ok
Dec 11 09:15:28 compute-1 ceph-mon[80018]: 4.13 scrub starts
Dec 11 09:15:28 compute-1 ceph-mon[80018]: 4.13 scrub ok
Dec 11 09:15:28 compute-1 ceph-mon[80018]: 4.2 scrub starts
Dec 11 09:15:28 compute-1 ceph-mon[80018]: 4.2 scrub ok
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:29.166+0000 7fd980980140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]:   from numpy import show_config as show_numpy_config
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:29.341+0000 7fd980980140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'influx'
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:29.426+0000 7fd980980140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'insights'
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:29.590+0000 7fd980980140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:29 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 11 09:15:29 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 11 09:15:29 compute-1 ceph-mon[80018]: 3.10 deep-scrub starts
Dec 11 09:15:29 compute-1 ceph-mon[80018]: 3.10 deep-scrub ok
Dec 11 09:15:29 compute-1 ceph-mon[80018]: 6.0 scrub starts
Dec 11 09:15:29 compute-1 ceph-mon[80018]: 6.0 scrub ok
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:30.702+0000 7fd980980140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:30 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 11 09:15:30 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:30 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:30.948+0000 7fd980980140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mon[80018]: 6.15 scrub starts
Dec 11 09:15:31 compute-1 ceph-mon[80018]: 6.15 scrub ok
Dec 11 09:15:31 compute-1 ceph-mon[80018]: 5.3 scrub starts
Dec 11 09:15:31 compute-1 ceph-mon[80018]: 5.3 scrub ok
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:31.030+0000 7fd980980140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:31.099+0000 7fd980980140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:31.180+0000 7fd980980140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'progress'
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:31.252+0000 7fd980980140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:31.599+0000 7fd980980140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'restful'
Dec 11 09:15:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:31.698+0000 7fd980980140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:31 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 11 09:15:31 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 11 09:15:31 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:32 compute-1 ceph-mon[80018]: 5.16 scrub starts
Dec 11 09:15:32 compute-1 ceph-mon[80018]: 5.16 scrub ok
Dec 11 09:15:32 compute-1 ceph-mon[80018]: from='osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 11 09:15:32 compute-1 ceph-mon[80018]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec 11 09:15:32 compute-1 ceph-mon[80018]: 3.6 scrub starts
Dec 11 09:15:32 compute-1 ceph-mon[80018]: 3.6 scrub ok
Dec 11 09:15:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e37 e37: 3 total, 2 up, 3 in
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rook'
Dec 11 09:15:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:32.177+0000 7fd980980140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 11 09:15:32 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:32.762+0000 7fd980980140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:32.836+0000 7fd980980140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'stats'
Dec 11 09:15:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:32.919+0000 7fd980980140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:32 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'status'
Dec 11 09:15:33 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e38 e38: 3 total, 2 up, 3 in
Dec 11 09:15:33 compute-1 ceph-mon[80018]: 3.11 scrub starts
Dec 11 09:15:33 compute-1 ceph-mon[80018]: 3.11 scrub ok
Dec 11 09:15:33 compute-1 ceph-mon[80018]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 11 09:15:33 compute-1 ceph-mon[80018]: osdmap e37: 3 total, 2 up, 3 in
Dec 11 09:15:33 compute-1 ceph-mon[80018]: from='osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 11 09:15:33 compute-1 ceph-mon[80018]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec 11 09:15:33 compute-1 ceph-mon[80018]: 5.0 deep-scrub starts
Dec 11 09:15:33 compute-1 ceph-mon[80018]: 5.0 deep-scrub ok
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:33.073+0000 7fd980980140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:33.144+0000 7fd980980140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:33.310+0000 7fd980980140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 systemd[72530]: Starting Mark boot as successful...
Dec 11 09:15:33 compute-1 systemd[72530]: Finished Mark boot as successful.
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:33.548+0000 7fd980980140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.198016167s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621490479s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.198016167s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621490479s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197721481s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621315002s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.470692635s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.894317627s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.470692635s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894317627s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197721481s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621315002s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197620392s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621543884s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197620392s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621543884s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.15( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197823524s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621879578s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.15( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197823524s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621879578s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.11( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197790146s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621879578s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.11( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197790146s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621879578s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.1f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197169304s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621307373s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.1f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197169304s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621307373s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197546959s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621833801s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.194591522s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.618888855s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197546959s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621833801s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.194591522s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618888855s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.470147133s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.894577026s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.470147133s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894577026s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197238922s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621795654s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.11( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197680473s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.622245789s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197238922s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621795654s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.11( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197680473s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622245789s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.16( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197209358s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.621864319s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.16( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197209358s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621864319s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197494507s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.622299194s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197494507s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622299194s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197617531s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.622474670s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.469572067s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.894439697s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197617531s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622474670s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.469572067s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894439697s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.e( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197600365s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.622207642s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.e( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.197600365s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622207642s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.469006538s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.894134521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[5.4( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201630592s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.626762390s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[5.4( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201630592s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.626762390s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.469006538s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894134521s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.5( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201795578s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627059937s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[7.5( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201795578s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627059937s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.468770981s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.894126892s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.468770981s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894126892s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201814651s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627296448s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201814651s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627296448s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.9( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201760292s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627273560s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.9( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201760292s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627273560s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[5.e( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201800346s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627410889s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[5.e( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201800346s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627410889s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.1a( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201764107s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627510071s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.1a( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201764107s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627510071s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.1d( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201784134s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627601624s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.468299866s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.894134521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[3.1d( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201784134s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627601624s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.468299866s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894134521s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[5.1a( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201695442s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 active pruub 82.627639771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.464394569s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 active pruub 82.890365601s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[5.1a( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=38 pruub=8.201695442s) [] r=-1 lpr=38 pi=[32,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627639771s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 38 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=38 pruub=8.464394569s) [] r=-1 lpr=38 pi=[26,38)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.890365601s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:33 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 11 09:15:33 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:33.847+0000 7fd980980140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:33.920+0000 7fd980980140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: mgr load Constructed class from module: dashboard
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: [dashboard INFO root] Starting engine...
Dec 11 09:15:33 compute-1 ceph-mgr[80326]: ms_deliver_dispatch: unhandled message 0x5617d9b25860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 11 09:15:34 compute-1 ceph-mgr[80326]: [dashboard INFO root] Engine started...
Dec 11 09:15:34 compute-1 ceph-mon[80018]: 3.e scrub starts
Dec 11 09:15:34 compute-1 ceph-mon[80018]: 3.e scrub ok
Dec 11 09:15:34 compute-1 ceph-mon[80018]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec 11 09:15:34 compute-1 ceph-mon[80018]: osdmap e38: 3 total, 2 up, 3 in
Dec 11 09:15:34 compute-1 ceph-mon[80018]: 3.7 scrub starts
Dec 11 09:15:34 compute-1 ceph-mon[80018]: 3.7 scrub ok
Dec 11 09:15:34 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:34 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:34 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:34 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:34 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e39 e39: 3 total, 2 up, 3 in
Dec 11 09:15:34 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 11 09:15:34 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 11 09:15:35 compute-1 ceph-mon[80018]: purged_snaps scrub starts
Dec 11 09:15:35 compute-1 ceph-mon[80018]: purged_snaps scrub ok
Dec 11 09:15:35 compute-1 ceph-mon[80018]: 5.9 scrub starts
Dec 11 09:15:35 compute-1 ceph-mon[80018]: 5.9 scrub ok
Dec 11 09:15:35 compute-1 ceph-mon[80018]: mgrmap e14: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:35 compute-1 ceph-mon[80018]: 4.0 scrub starts
Dec 11 09:15:35 compute-1 ceph-mon[80018]: 4.0 scrub ok
Dec 11 09:15:35 compute-1 ceph-mon[80018]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:35 compute-1 ceph-mon[80018]: Activating manager daemon compute-0.wwpcae
Dec 11 09:15:35 compute-1 ceph-mon[80018]: osdmap e39: 3 total, 2 up, 3 in
Dec 11 09:15:35 compute-1 ceph-mon[80018]: mgrmap e15: compute-0.wwpcae(active, starting, since 0.0440277s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:15:35 compute-1 ceph-mon[80018]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:15:35 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:15:35 compute-1 sshd-session[80443]: Accepted publickey for ceph-admin from 192.168.122.100 port 43626 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:15:35 compute-1 systemd-logind[791]: New session 33 of user ceph-admin.
Dec 11 09:15:35 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Dec 11 09:15:35 compute-1 sshd-session[80443]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:15:35 compute-1 sudo[80447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:35 compute-1 sudo[80447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:35 compute-1 sudo[80447]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:35 compute-1 sudo[80472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:15:35 compute-1 sudo[80472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:35 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:35 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 11 09:15:35 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 11 09:15:36 compute-1 podman[80569]: 2025-12-11 09:15:36.51883378 +0000 UTC m=+0.476145882 container exec 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:15:36 compute-1 podman[80569]: 2025-12-11 09:15:36.61169022 +0000 UTC m=+0.569002322 container exec_died 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:15:36 compute-1 ceph-mon[80018]: 3.c scrub starts
Dec 11 09:15:36 compute-1 ceph-mon[80018]: 3.c scrub ok
Dec 11 09:15:36 compute-1 ceph-mon[80018]: 4.7 scrub starts
Dec 11 09:15:36 compute-1 ceph-mon[80018]: 4.7 scrub ok
Dec 11 09:15:36 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 11 09:15:36 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 11 09:15:36 compute-1 sudo[80472]: pam_unix(sudo:session): session closed for user root
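The sudo invocations above and below are the cephadm orchestrator working compute-1 over SSH as ceph-admin: it first resolves python3, then runs the cephadm zipapp it previously copied under /var/lib/ceph/<fsid>/. Reproducing the inventory call by hand from a root shell on this host (same file and subcommand as logged; the --image and --timeout flags are optional for a manual run) would look roughly like:

    # list the ceph daemons cephadm manages on this host
    python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 ls

The gather-facts and list-networks calls that follow are the other two host probes the orchestrator runs when it refreshes host state.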
Dec 11 09:15:36 compute-1 sudo[80650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:36 compute-1 sudo[80650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:36 compute-1 sudo[80650]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-1 sudo[80675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:15:37 compute-1 sudo[80675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:37 compute-1 sudo[80675]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-1 ceph-mon[80018]: 6.a scrub starts
Dec 11 09:15:37 compute-1 ceph-mon[80018]: 6.a scrub ok
Dec 11 09:15:37 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:36] ENGINE Bus STARTING
Dec 11 09:15:37 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:36] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:15:37 compute-1 ceph-mon[80018]: 5.6 scrub starts
Dec 11 09:15:37 compute-1 ceph-mon[80018]: 5.6 scrub ok
Dec 11 09:15:37 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:36] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:15:37 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:36] ENGINE Bus STARTED
Dec 11 09:15:37 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:36] ENGINE Client ('192.168.122.100', 45858) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='client.14319 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:37 compute-1 ceph-mon[80018]: mgrmap e16: compute-0.wwpcae(active, since 1.90983s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:37 compute-1 ceph-mon[80018]: pgmap v3: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-1 ceph-mon[80018]: pgmap v4: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:37 compute-1 ceph-mon[80018]: 6.8 scrub starts
Dec 11 09:15:37 compute-1 ceph-mon[80018]: 6.8 scrub ok
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:37 compute-1 sudo[80730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:37 compute-1 sudo[80730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:37 compute-1 sudo[80730]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:37 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 11 09:15:37 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 11 09:15:37 compute-1 sudo[80755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:15:37 compute-1 sudo[80755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:38 compute-1 sudo[80755]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:38 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 11 09:15:38 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 11 09:15:38 compute-1 ceph-mon[80018]: 5.c scrub starts
Dec 11 09:15:38 compute-1 ceph-mon[80018]: 5.c scrub ok
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:38 compute-1 ceph-mon[80018]: mgrmap e17: compute-0.wwpcae(active, since 3s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:38 compute-1 ceph-mon[80018]: 3.f scrub starts
Dec 11 09:15:38 compute-1 ceph-mon[80018]: 3.f scrub ok
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='client.14349 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:38 compute-1 ceph-mon[80018]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:38 compute-1 ceph-mon[80018]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:15:38 compute-1 ceph-mon[80018]: Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
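These Adjusting/Unable pairs are cephadm's memory autotuner at work: it splits each host's RAM across the daemons it schedules, and on these small VMs the per-OSD share comes out near 128 MiB, below the hard minimum of 939524096 bytes (896 MiB) that osd_memory_target enforces, so the value is rejected and the pair repeats for every host. On undersized lab nodes, one hedged way to stop the churn (both settings are documented; the value shown simply pins the minimum):

    ceph config set osd osd_memory_target_autotune false   # stop cephadm from recomputing it
    ceph config set osd osd_memory_target 939524096        # pin at the allowed minimum (896 MiB)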
Dec 11 09:15:38 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-1 sudo[80799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:15:39 compute-1 sudo[80799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-1 sudo[80799]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-1 sudo[80824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:15:39 compute-1 sudo[80824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-1 sudo[80824]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 11 09:15:39 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 11 09:15:39 compute-1 sudo[80849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:39 compute-1 sudo[80849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Dec 11 09:15:39 compute-1 sudo[80849]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114064693s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621307373s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114029646s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621307373s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.111587763s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618888855s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.111553431s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.618888855s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113944054s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621315002s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.386927128s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894317627s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.386896849s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894317627s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113903046s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621315002s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.12( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113991976s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621543884s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.12( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113975763s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621543884s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.11( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114147186s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621879578s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.11( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114134550s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621879578s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113969326s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621795654s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113977671s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621833801s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.15( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113964319s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621833801s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.16( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113929749s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621864319s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.16( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113919735s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621864319s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113920689s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621879578s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113904953s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621879578s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.386524916s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894577026s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.12( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.386512756s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894577026s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.11( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114094019s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622245789s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=32/33 n=0 ec=30/20 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.113937140s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621795654s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.e( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114033937s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622207642s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.11( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114075422s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622245789s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.e( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114022493s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622207642s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114028454s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622299194s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.9( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114018679s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622299194s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.386100769s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894439697s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114104509s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622474670s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.386085272s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894439697s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.8( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.114094257s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.622474670s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.385624409s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894134521s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.385610580s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894134521s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118218422s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.626762390s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118206501s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.626762390s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.5( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118393421s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627059937s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[7.5( empty local-lis/les=32/33 n=0 ec=32/21 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118383169s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627059937s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.385349274s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894126892s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118473768s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627296448s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.5( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.385322571s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894126892s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.1( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118453264s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627296448s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.112579823s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621490479s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118324041s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627273560s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[5.e( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118393898s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627410889s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[5.e( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118372202s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627410889s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.1a( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118447065s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627510071s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[4.1f( empty local-lis/les=32/33 n=0 ec=28/17 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.112550735s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.621490479s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.1a( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118428469s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627510071s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.1d( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118419647s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627601624s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.384936094s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894134521s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.1d( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118401051s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627601624s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=32/33 n=0 ec=28/15 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118237257s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627273560s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.384917021s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.894134521s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[5.1a( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118388176s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627639771s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[5.1a( empty local-lis/les=32/33 n=0 ec=30/19 lis/c=32/32 les/c/f=33/33/0 sis=40 pruub=2.118376017s) [2] r=-1 lpr=40 pi=[32,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.627639771s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.381042719s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.890365601s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:15:39 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=26/27 n=0 ec=26/13 lis/c=26/26 les/c/f=27/27/0 sis=40 pruub=2.381027937s) [2] r=-1 lpr=40 pi=[26,40)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 82.890365601s@ mbc={}] state<Start>: transitioning to Stray
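This burst of PeeringState lines is osd.0 digesting osdmap e40, in which osd.2 boots: each placement group recomputes its past interval, observes its up/acting sets change from [] to [2], and, because osd.0 now holds role -1 (it is not in the acting set), parks itself as Stray until the new primary contacts it. Any of the PGs named above can be inspected with the standard PG commands, for example:

    ceph pg map 7.1f     # current up and acting sets for the PG
    ceph pg 7.1f query   # full peering history and state, as JSON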
Dec 11 09:15:39 compute-1 sudo[80874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:39 compute-1 sudo[80874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-1 sudo[80874]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:39 compute-1 sudo[80899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:39 compute-1 sudo[80899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:39 compute-1 ceph-mon[80018]: 6.f deep-scrub starts
Dec 11 09:15:39 compute-1 ceph-mon[80018]: 6.f deep-scrub ok
Dec 11 09:15:39 compute-1 ceph-mon[80018]: pgmap v5: 193 pgs: 136 active+clean, 57 unknown; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Dec 11 09:15:39 compute-1 ceph-mon[80018]: 4.a scrub starts
Dec 11 09:15:39 compute-1 ceph-mon[80018]: 4.a scrub ok
Dec 11 09:15:39 compute-1 ceph-mon[80018]: OSD bench result of 5714.565967 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
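The mClock scheduler sanity-checks the bench each OSD runs at startup; 5714 IOPS on osd.2 falls outside the 50-500 IOPS plausibility window, so the stored capacity stays at the 315 IOPS default and the log recommends measuring externally. A sketch of that recommendation, assuming the OSD sits on /dev/vdb (hypothetical for this VM) and reusing the bench figure as the override; note that fio writes destructively, so only run it before the device holds data:

    # 4 KiB random-write capacity test, as the log message suggests
    fio --name=osd2-bench --filename=/dev/vdb --direct=1 --rw=randwrite \
        --bs=4k --iodepth=64 --numjobs=1 --runtime=30 --time_based --group_reporting
    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd 5714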
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='client.14355 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-1 ceph-mon[80018]: mgrmap e18: compute-0.wwpcae(active, since 4s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-1 ceph-mon[80018]: osd.2 [v2:192.168.122.102:6800/1051349850,v1:192.168.122.102:6801/1051349850] boot
Dec 11 09:15:39 compute-1 ceph-mon[80018]: osdmap e40: 3 total, 3 up, 3 in
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:39 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:39 compute-1 sudo[80899]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[80947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:40 compute-1 sudo[80947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[80947]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[80972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:15:40 compute-1 sudo[80972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[80972]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[80997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:15:40 compute-1 sudo[80997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[80997]: pam_unix(sudo:session): session closed for user root
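Every file cephadm distributes follows the staging dance visible in these sudo lines: create the file under /tmp/cephadm-<fsid>, set the final ownership and mode there, then mv it into place, so a reader of /etc/ceph/ceph.conf never sees a half-written file (the rename is atomic as long as source and target share a filesystem). Condensed to a shell sketch with the paths from this log, where $CONTENT stands in for the generated minimal config:

    tmp=/tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
    printf '%s\n' "$CONTENT" > "$tmp"      # staged write
    chown 0:0 "$tmp" && chmod 644 "$tmp"   # final ownership and mode first
    mv "$tmp" /etc/ceph/ceph.conf          # swap into place in one step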
Dec 11 09:15:40 compute-1 sudo[81022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:40 compute-1 sudo[81022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81022]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:40 compute-1 sudo[81047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81047]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-1 sudo[81072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81072]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:40 compute-1 sudo[81097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81097]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-1 sudo[81122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81122]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-1 sudo[81170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81170]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:15:40 compute-1 sudo[81195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81195]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 11 09:15:40 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:40 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 11 09:15:40 compute-1 sudo[81220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-1 sudo[81220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81220]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Dec 11 09:15:40 compute-1 sudo[81245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:15:40 compute-1 sudo[81245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81245]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 sudo[81270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:15:40 compute-1 sudo[81270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81270]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:40 compute-1 ceph-mon[80018]: 3.b scrub starts
Dec 11 09:15:40 compute-1 ceph-mon[80018]: 3.b scrub ok
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:15:40 compute-1 ceph-mon[80018]: 3.d scrub starts
Dec 11 09:15:40 compute-1 ceph-mon[80018]: 3.d scrub ok
Dec 11 09:15:40 compute-1 ceph-mon[80018]: from='client.14361 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:15:40 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:40 compute-1 ceph-mon[80018]: osdmap e41: 3 total, 3 up, 3 in
Dec 11 09:15:40 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:15:40 compute-1 sudo[81295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:40 compute-1 sudo[81295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:40 compute-1 sudo[81295]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:41 compute-1 sudo[81320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81320]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81345]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81393]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81418]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 sudo[81443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81443]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:41 compute-1 sudo[81468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81468]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:15:41 compute-1 sudo[81493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81493]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81518]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:15:41 compute-1 sudo[81543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81543]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81568]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec 11 09:15:41 compute-1 ceph-mon[80018]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2447536963' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:41 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 11 09:15:41 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 11 09:15:41 compute-1 sudo[81616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81616]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 sudo[81641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:15:41 compute-1 sudo[81641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81641]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 ceph-mon[80018]: 4.b scrub starts
Dec 11 09:15:41 compute-1 ceph-mon[80018]: 4.b scrub ok
Dec 11 09:15:41 compute-1 ceph-mon[80018]: pgmap v7: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:15:41 compute-1 ceph-mon[80018]: 4.d scrub starts
Dec 11 09:15:41 compute-1 ceph-mon[80018]: 4.d scrub ok
Dec 11 09:15:41 compute-1 ceph-mon[80018]: from='client.14367 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:15:41 compute-1 ceph-mon[80018]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 ceph-mon[80018]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 ceph-mon[80018]: 3.9 scrub starts
Dec 11 09:15:41 compute-1 ceph-mon[80018]: 3.9 scrub ok
Dec 11 09:15:41 compute-1 ceph-mon[80018]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 ceph-mon[80018]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2447536963' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:41 compute-1 ceph-mon[80018]: from='client.? ' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec 11 09:15:41 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:41 compute-1 sudo[81666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:15:41 compute-1 sudo[81666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:41 compute-1 sudo[81666]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  1: '-n'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  2: 'mgr.compute-1.unesvp'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  3: '-f'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  4: '--setuser'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  5: 'ceph'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  6: '--setgroup'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  7: 'ceph'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  9: '--default-log-to-journald=true'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 11 09:15:41 compute-1 ceph-mgr[80326]: mgr respawn  exe_path /proc/self/exe
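Toggling a module rewrites the mgrmap, and every ceph-mgr that sees the new map re-execs itself via /proc/self/exe with its original argv, which is what the respawn lines above enumerate. The trigger here is the dashboard setup cycling the module off and, two seconds later in this log, back on, the equivalent of:

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard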
Dec 11 09:15:42 compute-1 sshd-session[80446]: Connection closed by 192.168.122.100 port 43626
Dec 11 09:15:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setuser ceph since I am not root
Dec 11 09:15:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setgroup ceph since I am not root
Dec 11 09:15:42 compute-1 sshd-session[80443]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:15:42 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Dec 11 09:15:42 compute-1 systemd[1]: session-33.scope: Consumed 4.796s CPU time.
Dec 11 09:15:42 compute-1 systemd-logind[791]: Session 33 logged out. Waiting for processes to exit.
Dec 11 09:15:42 compute-1 systemd-logind[791]: Removed session 33.
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:42.193+0000 7f087f732140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:42 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:42.298+0000 7f087f732140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
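Each "missing NOTIFY_TYPES member" warning (logged twice per module: once by the daemon, once via the container unit's stderr) only means the bundled module does not declare which cluster notifications it wants; the modules still load. In a mgr module, NOTIFY_TYPES is a class attribute consumed by the module loader. A hedged skeleton follows; mgr_module is only importable inside ceph-mgr's embedded interpreter, so this is illustrative rather than runnable standalone:

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the warning and tells the mgr
        # which events to route to notify().
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.info("got %s notification", notify_type)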
Dec 11 09:15:42 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 11 09:15:42 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 11 09:15:42 compute-1 ceph-mon[80018]: 5.a scrub starts
Dec 11 09:15:42 compute-1 ceph-mon[80018]: 5.a scrub ok
Dec 11 09:15:42 compute-1 ceph-mon[80018]: 6.7 scrub starts
Dec 11 09:15:42 compute-1 ceph-mon[80018]: 6.7 scrub ok
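The interleaved "N.M scrub starts / scrub ok" lines throughout this window are routine placement-group scrubs relayed into the cluster log; the pgid is <pool-id>.<pg-id in hex>, and a deep-scrub additionally re-reads and checksums object data. A scrub can also be requested by hand, for example:

    import subprocess

    # Ask the acting OSDs to scrub placement group 4.5 (pool 4, PG 0x5);
    # use "deep-scrub" in place of "scrub" to verify data checksums too.
    subprocess.run(["ceph", "pg", "scrub", "4.5"], check=True)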
Dec 11 09:15:42 compute-1 ceph-mon[80018]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec 11 09:15:42 compute-1 ceph-mon[80018]: mgrmap e19: compute-0.wwpcae(active, since 7s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:42 compute-1 ceph-mon[80018]: from='mgr.14313 192.168.122.100:0/1097182915' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:42 compute-1 ceph-mon[80018]: 5.e scrub starts
Dec 11 09:15:42 compute-1 ceph-mon[80018]: 5.e scrub ok
Dec 11 09:15:43 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'crash'
Dec 11 09:15:43 compute-1 ceph-mgr[80326]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:43.151+0000 7f087f732140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:43 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:43 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 11 09:15:43 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 11 09:15:43 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:43 compute-1 ceph-mgr[80326]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:43.826+0000 7f087f732140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:43 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 6.9 scrub starts
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 6.9 scrub ok
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 4.5 scrub starts
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 4.5 scrub ok
Dec 11 09:15:43 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2032173256' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 4.1 scrub starts
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 4.1 scrub ok
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 6.b scrub starts
Dec 11 09:15:43 compute-1 ceph-mon[80018]: 6.b scrub ok
Dec 11 09:15:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]:   from numpy import show_config as show_numpy_config
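The UserWarning above comes from scipy importing NumPy inside ceph-mgr, which runs each Python module in its own CPython sub-interpreter (diskprediction_local pulls in scipy); NumPy does not fully support sub-interpreters, so it warns but keeps working. If the noise matters, that specific warning can be filtered before the import; a small sketch:

    import warnings

    # Suppress only NumPy's sub-interpreter UserWarning; the message
    # argument is a regex matched against the start of the warning text.
    warnings.filterwarnings(
        "ignore",
        message="NumPy was imported from a Python sub-interpreter",
        category=UserWarning,
    )
    import scipy  # import after installing the filter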
Dec 11 09:15:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:43.999+0000 7f087f732140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'influx'
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:44.082+0000 7f087f732140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'insights'
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:44.230+0000 7f087f732140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:44 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 11 09:15:44 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:44 compute-1 ceph-mon[80018]: 5.7 scrub starts
Dec 11 09:15:44 compute-1 ceph-mon[80018]: 5.7 scrub ok
Dec 11 09:15:44 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/2032173256' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec 11 09:15:44 compute-1 ceph-mon[80018]: mgrmap e20: compute-0.wwpcae(active, since 9s), standbys: compute-2.uiimcn, compute-1.unesvp
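This completes the dashboard disable/enable round trip started at 09:15:41; each toggle changes the enabled-module set in the mgrmap, which is exactly what triggered the mgr respawns above. The audit lines show the underlying mon command; the same call can be made through the librados Python binding, assuming /etc/ceph/ceph.conf and the client.admin keyring are readable:

    import json
    import rados

    # Mirror the {"prefix": "mgr module enable", "module": "dashboard"}
    # dispatch recorded in the audit log above.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "mgr module enable",
                          "module": "dashboard"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(ret, outs)
    finally:
        cluster.shutdown()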
Dec 11 09:15:44 compute-1 ceph-mon[80018]: 5.1a scrub starts
Dec 11 09:15:44 compute-1 ceph-mon[80018]: 5.1a scrub ok
Dec 11 09:15:44 compute-1 ceph-mon[80018]: 4.17 scrub starts
Dec 11 09:15:44 compute-1 ceph-mon[80018]: 4.17 scrub ok
Dec 11 09:15:44 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:45.307+0000 7f087f732140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:45.536+0000 7f087f732140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:45.615+0000 7f087f732140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:45 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:45 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 11 09:15:45 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:45.684+0000 7f087f732140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:45.764+0000 7f087f732140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'progress'
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:45.845+0000 7f087f732140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:45 compute-1 ceph-mon[80018]: 5.2 scrub starts
Dec 11 09:15:45 compute-1 ceph-mon[80018]: 5.2 scrub ok
Dec 11 09:15:45 compute-1 ceph-mon[80018]: 7.5 scrub starts
Dec 11 09:15:45 compute-1 ceph-mon[80018]: 7.5 scrub ok
Dec 11 09:15:45 compute-1 ceph-mon[80018]: 4.16 scrub starts
Dec 11 09:15:45 compute-1 ceph-mon[80018]: 4.16 scrub ok
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:46.265+0000 7f087f732140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:46.362+0000 7f087f732140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'restful'
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:46 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 11 09:15:46 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:46 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rook'
Dec 11 09:15:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:46.859+0000 7f087f732140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 6.5 scrub starts
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 6.5 scrub ok
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 3.1d deep-scrub starts
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 3.1d deep-scrub ok
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 5.17 scrub starts
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 5.17 scrub ok
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:47.471+0000 7f087f732140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:47.552+0000 7f087f732140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:47.640+0000 7f087f732140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'stats'
Dec 11 09:15:47 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 11 09:15:47 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'status'
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:47.799+0000 7f087f732140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:47.877+0000 7f087f732140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:47 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 3.3 scrub starts
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 3.3 scrub ok
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 5.4 scrub starts
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 5.4 scrub ok
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 6.14 scrub starts
Dec 11 09:15:47 compute-1 ceph-mon[80018]: 6.14 scrub ok
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:48.046+0000 7f087f732140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:48.298+0000 7f087f732140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:48.583+0000 7f087f732140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:48 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 11 09:15:48 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:48.658+0000 7f087f732140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: ms_deliver_dispatch: unhandled message 0x5618ed815860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  1: '-n'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  2: 'mgr.compute-1.unesvp'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  3: '-f'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  4: '--setuser'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  5: 'ceph'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  6: '--setgroup'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  7: 'ceph'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  8: '--default-log-to-file=false'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  9: '--default-log-to-journald=true'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setuser ceph since I am not root
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setgroup ceph since I am not root
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: pidfile_write: ignore empty --pid-file
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'alerts'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:48.878+0000 7f6528cdc140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'balancer'
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'cephadm'
Dec 11 09:15:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:48.960+0000 7f6528cdc140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:15:48 compute-1 ceph-mon[80018]: 6.2 scrub starts
Dec 11 09:15:48 compute-1 ceph-mon[80018]: 6.2 scrub ok
Dec 11 09:15:48 compute-1 ceph-mon[80018]: 3.1a deep-scrub starts
Dec 11 09:15:48 compute-1 ceph-mon[80018]: 3.1a deep-scrub ok
Dec 11 09:15:48 compute-1 ceph-mon[80018]: 3.12 scrub starts
Dec 11 09:15:48 compute-1 ceph-mon[80018]: 3.12 scrub ok
Dec 11 09:15:48 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:48 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:48 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:48 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:49 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Dec 11 09:15:49 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 11 09:15:49 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 11 09:15:49 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'crash'
Dec 11 09:15:49 compute-1 ceph-mgr[80326]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:49 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'dashboard'
Dec 11 09:15:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:49.769+0000 7f6528cdc140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-mon[80018]: 3.a scrub starts
Dec 11 09:15:50 compute-1 ceph-mon[80018]: 3.a scrub ok
Dec 11 09:15:50 compute-1 ceph-mon[80018]: mgrmap e21: compute-0.wwpcae(active, since 14s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:50 compute-1 ceph-mon[80018]: 4.8 deep-scrub starts
Dec 11 09:15:50 compute-1 ceph-mon[80018]: 4.8 deep-scrub ok
Dec 11 09:15:50 compute-1 ceph-mon[80018]: 5.14 scrub starts
Dec 11 09:15:50 compute-1 ceph-mon[80018]: 5.14 scrub ok
Dec 11 09:15:50 compute-1 ceph-mon[80018]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:50 compute-1 ceph-mon[80018]: Activating manager daemon compute-0.wwpcae
Dec 11 09:15:50 compute-1 ceph-mon[80018]: osdmap e42: 3 total, 3 up, 3 in
Dec 11 09:15:50 compute-1 ceph-mon[80018]: mgrmap e22: compute-0.wwpcae(active, starting, since 0.0271702s), standbys: compute-1.unesvp, compute-2.uiimcn
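Because the active mgr (compute-0.wwpcae) also respawned, the mon briefly re-activates it and republishes the mgrmap ("active, starting"); the standbys on compute-1 and compute-2 stay in reserve. The current map is easy to inspect as JSON; a sketch:

    import json
    import subprocess

    # Read the mgrmap the mon is logging above: the active daemon's name
    # plus the list of standbys.
    dump = json.loads(subprocess.run(
        ["ceph", "mgr", "dump", "--format", "json"],
        check=True, capture_output=True).stdout)
    print(dump["active_name"], [s["name"] for s in dump["standbys"]])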
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:50.367+0000 7f6528cdc140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]:   from numpy import show_config as show_numpy_config
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'influx'
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:50.522+0000 7f6528cdc140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:50.598+0000 7f6528cdc140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'insights'
Dec 11 09:15:50 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 11 09:15:50 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 11 09:15:50 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'iostat'
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:50.735+0000 7f6528cdc140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:15:50 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'localpool'
Dec 11 09:15:51 compute-1 ceph-mon[80018]: 5.1 scrub starts
Dec 11 09:15:51 compute-1 ceph-mon[80018]: 5.1 scrub ok
Dec 11 09:15:51 compute-1 ceph-mon[80018]: 7.1f scrub starts
Dec 11 09:15:51 compute-1 ceph-mon[80018]: 7.1f scrub ok
Dec 11 09:15:51 compute-1 ceph-mon[80018]: 6.16 scrub starts
Dec 11 09:15:51 compute-1 ceph-mon[80018]: 6.16 scrub ok
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mirroring'
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'nfs'
Dec 11 09:15:51 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 11 09:15:51 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:51.699+0000 7f6528cdc140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:15:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:51.917+0000 7f6528cdc140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:51 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_support'
Dec 11 09:15:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:51.998+0000 7f6528cdc140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:15:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:52.068+0000 7f6528cdc140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mon[80018]: 3.5 scrub starts
Dec 11 09:15:52 compute-1 ceph-mon[80018]: 3.5 scrub ok
Dec 11 09:15:52 compute-1 ceph-mon[80018]: 4.9 scrub starts
Dec 11 09:15:52 compute-1 ceph-mon[80018]: 4.9 scrub ok
Dec 11 09:15:52 compute-1 ceph-mon[80018]: 6.11 scrub starts
Dec 11 09:15:52 compute-1 ceph-mon[80018]: 6.11 scrub ok
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'progress'
Dec 11 09:15:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:52.149+0000 7f6528cdc140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'prometheus'
Dec 11 09:15:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:52.220+0000 7f6528cdc140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Dec 11 09:15:52 compute-1 systemd[72530]: Activating special unit Exit the Session...
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped target Main User Target.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped target Basic System.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped target Paths.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped target Sockets.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped target Timers.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 11 09:15:52 compute-1 systemd[72530]: Closed D-Bus User Message Bus Socket.
Dec 11 09:15:52 compute-1 systemd[72530]: Stopped Create User's Volatile Files and Directories.
Dec 11 09:15:52 compute-1 systemd[72530]: Removed slice User Application Slice.
Dec 11 09:15:52 compute-1 systemd[72530]: Reached target Shutdown.
Dec 11 09:15:52 compute-1 systemd[72530]: Finished Exit the Session.
Dec 11 09:15:52 compute-1 systemd[72530]: Reached target Exit the Session.
Dec 11 09:15:52 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Dec 11 09:15:52 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Dec 11 09:15:52 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 11 09:15:52 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 11 09:15:52 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 11 09:15:52 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 11 09:15:52 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Dec 11 09:15:52 compute-1 systemd[1]: user-42477.slice: Consumed 1min 3.352s CPU time.
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:15:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:52.578+0000 7f6528cdc140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 11 09:15:52 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:52.679+0000 7f6528cdc140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'restful'
Dec 11 09:15:52 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rgw'
Dec 11 09:15:53 compute-1 ceph-mon[80018]: 5.f scrub starts
Dec 11 09:15:53 compute-1 ceph-mon[80018]: 5.f scrub ok
Dec 11 09:15:53 compute-1 ceph-mon[80018]: 7.11 scrub starts
Dec 11 09:15:53 compute-1 ceph-mon[80018]: 7.11 scrub ok
Dec 11 09:15:53 compute-1 ceph-mon[80018]: 4.12 scrub starts
Dec 11 09:15:53 compute-1 ceph-mon[80018]: 4.12 scrub ok
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rook'
Dec 11 09:15:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:53.122+0000 7f6528cdc140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 11 09:15:53 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:53.669+0000 7f6528cdc140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'selftest'
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:15:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:53.741+0000 7f6528cdc140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:53.820+0000 7f6528cdc140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'stats'
Dec 11 09:15:53 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'status'
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telegraf'
Dec 11 09:15:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:54.009+0000 7f6528cdc140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telemetry'
Dec 11 09:15:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:54.084+0000 7f6528cdc140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mon[80018]: 4.e scrub starts
Dec 11 09:15:54 compute-1 ceph-mon[80018]: 4.e scrub ok
Dec 11 09:15:54 compute-1 ceph-mon[80018]: 3.15 scrub starts
Dec 11 09:15:54 compute-1 ceph-mon[80018]: 3.15 scrub ok
Dec 11 09:15:54 compute-1 ceph-mon[80018]: 6.10 scrub starts
Dec 11 09:15:54 compute-1 ceph-mon[80018]: 6.10 scrub ok
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:15:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:54.242+0000 7f6528cdc140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'volumes'
Dec 11 09:15:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:54.460+0000 7f6528cdc140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Dec 11 09:15:54 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:54.730+0000 7f6528cdc140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'zabbix'
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:15:54.803+0000 7f6528cdc140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: mgr load Constructed class from module: dashboard
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: [dashboard INFO root] Starting engine...
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: ms_deliver_dispatch: unhandled message 0x561ea2877860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 11 09:15:54 compute-1 ceph-mgr[80326]: [dashboard INFO root] Engine started...
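The dashboard on this (standby) mgr comes up without TLS ("server: ssl=no host=192.168.122.101 port=8443"). The listener is controlled by mgr configuration; the documented keys can be set from any admin node, for example:

    import subprocess

    # mgr/dashboard/server_port and mgr/dashboard/ssl are the documented
    # dashboard options; a mgr restart (or module reload) picks them up.
    subprocess.run(["ceph", "config", "set", "mgr",
                    "mgr/dashboard/server_port", "8443"], check=True)
    subprocess.run(["ceph", "config", "set", "mgr",
                    "mgr/dashboard/ssl", "false"], check=True)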
Dec 11 09:15:55 compute-1 ceph-mon[80018]: 6.e scrub starts
Dec 11 09:15:55 compute-1 ceph-mon[80018]: 6.e scrub ok
Dec 11 09:15:55 compute-1 ceph-mon[80018]: 7.16 scrub starts
Dec 11 09:15:55 compute-1 ceph-mon[80018]: 7.16 scrub ok
Dec 11 09:15:55 compute-1 ceph-mon[80018]: 6.13 deep-scrub starts
Dec 11 09:15:55 compute-1 ceph-mon[80018]: 6.13 deep-scrub ok
Dec 11 09:15:55 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:15:55 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp started
Dec 11 09:15:55 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 11 09:15:55 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 11 09:15:55 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:15:56 compute-1 ceph-mon[80018]: 4.c deep-scrub starts
Dec 11 09:15:56 compute-1 ceph-mon[80018]: 4.c deep-scrub ok
Dec 11 09:15:56 compute-1 ceph-mon[80018]: mgrmap e23: compute-0.wwpcae(active, starting, since 5s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:15:56 compute-1 ceph-mon[80018]: 7.14 scrub starts
Dec 11 09:15:56 compute-1 ceph-mon[80018]: 7.14 scrub ok
Dec 11 09:15:56 compute-1 ceph-mon[80018]: 5.1e scrub starts
Dec 11 09:15:56 compute-1 ceph-mon[80018]: 5.1e scrub ok
Dec 11 09:15:56 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:15:56 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:15:56 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 11 09:15:56 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 11 09:15:56 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Dec 11 09:15:57 compute-1 ceph-mon[80018]: 6.d scrub starts
Dec 11 09:15:57 compute-1 ceph-mon[80018]: 6.d scrub ok
Dec 11 09:15:57 compute-1 ceph-mon[80018]: mgrmap e24: compute-0.wwpcae(active, starting, since 6s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:57 compute-1 ceph-mon[80018]: 5.12 scrub starts
Dec 11 09:15:57 compute-1 ceph-mon[80018]: 5.12 scrub ok
Dec 11 09:15:57 compute-1 ceph-mon[80018]: 6.1d scrub starts
Dec 11 09:15:57 compute-1 ceph-mon[80018]: 6.1d scrub ok
Dec 11 09:15:57 compute-1 ceph-mon[80018]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:15:57 compute-1 ceph-mon[80018]: Activating manager daemon compute-0.wwpcae
Dec 11 09:15:57 compute-1 ceph-mon[80018]: osdmap e43: 3 total, 3 up, 3 in
Dec 11 09:15:57 compute-1 ceph-mon[80018]: mgrmap e25: compute-0.wwpcae(active, starting, since 0.0371803s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:15:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:15:57 compute-1 sshd-session[81767]: Accepted publickey for ceph-admin from 192.168.122.100 port 54308 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:15:57 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Dec 11 09:15:57 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts
Dec 11 09:15:57 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 11 09:15:57 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok
Dec 11 09:15:57 compute-1 systemd-logind[791]: New session 34 of user ceph-admin.
Dec 11 09:15:57 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 11 09:15:57 compute-1 systemd[1]: Starting User Manager for UID 42477...
Dec 11 09:15:57 compute-1 systemd[81771]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:15:57 compute-1 systemd[81771]: Queued start job for default target Main User Target.
Dec 11 09:15:57 compute-1 systemd[81771]: Created slice User Application Slice.
Dec 11 09:15:57 compute-1 systemd[81771]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 11 09:15:57 compute-1 systemd[81771]: Started Daily Cleanup of User's Temporary Directories.
Dec 11 09:15:57 compute-1 systemd[81771]: Reached target Paths.
Dec 11 09:15:57 compute-1 systemd[81771]: Reached target Timers.
Dec 11 09:15:57 compute-1 systemd[81771]: Starting D-Bus User Message Bus Socket...
Dec 11 09:15:57 compute-1 systemd[81771]: Starting Create User's Volatile Files and Directories...
Dec 11 09:15:57 compute-1 systemd[81771]: Listening on D-Bus User Message Bus Socket.
Dec 11 09:15:57 compute-1 systemd[81771]: Reached target Sockets.
Dec 11 09:15:57 compute-1 systemd[81771]: Finished Create User's Volatile Files and Directories.
Dec 11 09:15:57 compute-1 systemd[81771]: Reached target Basic System.
Dec 11 09:15:57 compute-1 systemd[81771]: Reached target Main User Target.
Dec 11 09:15:57 compute-1 systemd[81771]: Startup finished in 121ms.
Dec 11 09:15:57 compute-1 systemd[1]: Started User Manager for UID 42477.
Dec 11 09:15:57 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Dec 11 09:15:57 compute-1 sshd-session[81767]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:15:57 compute-1 sudo[81787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:57 compute-1 sudo[81787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:57 compute-1 sudo[81787]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:57 compute-1 sudo[81812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:15:57 compute-1 sudo[81812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
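Sessions like this one are the cephadm orchestrator at work: the active mgr logs in over ssh as ceph-admin, locates python3, and sudo-runs the host's copy of the cephadm binary ("ls" here, then "gather-facts" and "list-networks" below) to refresh its inventory. The same inventory call can be run by hand; the binary path below is copied from the COMMAND= line above:

    import json
    import subprocess

    # "cephadm ls" prints a JSON list of daemons deployed on this host.
    cephadm = ("/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    out = subprocess.run(["sudo", "python3", cephadm, "ls"],
                         check=True, capture_output=True).stdout
    print([d["name"] for d in json.loads(out)])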
Dec 11 09:15:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e2 new map
Dec 11 09:15:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-12-11T09:15:57.915202+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:15:57.915143+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 11 09:15:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
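The print_map dump above shows the just-created 'cephfs' filesystem: max_mds 1 but empty up/in sets, because no MDS daemon exists yet. The same map is available as JSON, which is easier to inspect than the journal dump; a sketch:

    import json
    import subprocess

    # "ceph fs dump" mirrors the print_map block relayed above.
    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        check=True, capture_output=True).stdout)
    for fs in dump["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], "max_mds:", m["max_mds"], "up:", m["up"])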
Dec 11 09:15:58 compute-1 ceph-mon[80018]: 5.1b scrub starts
Dec 11 09:15:58 compute-1 ceph-mon[80018]: 5.1b scrub ok
Dec 11 09:15:58 compute-1 ceph-mon[80018]: 2.15 scrub starts
Dec 11 09:15:58 compute-1 ceph-mon[80018]: 2.15 scrub ok
Dec 11 09:15:58 compute-1 ceph-mon[80018]: 3.19 deep-scrub starts
Dec 11 09:15:58 compute-1 ceph-mon[80018]: 3.19 deep-scrub ok
Dec 11 09:15:58 compute-1 ceph-mon[80018]: mgrmap e26: compute-0.wwpcae(active, since 1.06228s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:15:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec 11 09:15:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec 11 09:15:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec 11 09:15:58 compute-1 ceph-mon[80018]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 11 09:15:58 compute-1 ceph-mon[80018]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 11 09:15:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 11 09:15:58 compute-1 ceph-mon[80018]: osdmap e44: 3 total, 3 up, 3 in
Dec 11 09:15:58 compute-1 ceph-mon[80018]: fsmap cephfs:0
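"fs new" necessarily precedes any MDS daemon, so MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX fire the moment the filesystem appears ("fsmap cephfs:0"); they clear on their own once the mds.cephfs service spec saved a few lines below is deployed. The CLI equivalent of that spec, as a sketch:

    import subprocess

    # Declare an MDS service for "cephfs" on the three hosts named in the
    # "Saving service mds.cephfs spec" line; cephadm starts the daemons.
    subprocess.run(["ceph", "orch", "apply", "mds", "cephfs",
                    "--placement", "compute-0 compute-1 compute-2"],
                   check=True)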
Dec 11 09:15:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:58 compute-1 podman[81908]: 2025-12-11 09:15:58.421013276 +0000 UTC m=+0.059488214 container exec 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 11 09:15:58 compute-1 podman[81908]: 2025-12-11 09:15:58.507936893 +0000 UTC m=+0.146411851 container exec_died 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:15:58 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 11 09:15:58 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 11 09:15:58 compute-1 sudo[81812]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:58 compute-1 sudo[81997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:58 compute-1 sudo[81997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:58 compute-1 sudo[81997]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:58 compute-1 sudo[82022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:15:58 compute-1 sudo[82022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:59 compute-1 ceph-mon[80018]: 6.19 deep-scrub starts
Dec 11 09:15:59 compute-1 ceph-mon[80018]: 6.19 deep-scrub ok
Dec 11 09:15:59 compute-1 ceph-mon[80018]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:15:59 compute-1 ceph-mon[80018]: 2.d scrub starts
Dec 11 09:15:59 compute-1 ceph-mon[80018]: 2.d scrub ok
Dec 11 09:15:59 compute-1 ceph-mon[80018]: 2.19 scrub starts
Dec 11 09:15:59 compute-1 ceph-mon[80018]: 2.19 scrub ok
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:15:59 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 11 09:15:59 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 11 09:15:59 compute-1 sudo[82022]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-1 sudo[82078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:15:59 compute-1 sudo[82078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:15:59 compute-1 sudo[82078]: pam_unix(sudo:session): session closed for user root
Dec 11 09:15:59 compute-1 sudo[82103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:15:59 compute-1 sudo[82103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82103]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Dec 11 09:16:00 compute-1 ceph-mon[80018]: 5.1c scrub starts
Dec 11 09:16:00 compute-1 ceph-mon[80018]: 5.1c scrub ok
Dec 11 09:16:00 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:58] ENGINE Bus STARTING
Dec 11 09:16:00 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:58] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:16:00 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:58] ENGINE Client ('192.168.122.100', 45662) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:16:00 compute-1 ceph-mon[80018]: pgmap v5: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:00 compute-1 ceph-mon[80018]: from='client.14424 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 11 09:16:00 compute-1 ceph-mon[80018]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:00 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:58] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:16:00 compute-1 ceph-mon[80018]: [11/Dec/2025:09:15:58] ENGINE Bus STARTED
Dec 11 09:16:00 compute-1 ceph-mon[80018]: 2.a scrub starts
Dec 11 09:16:00 compute-1 ceph-mon[80018]: 2.a scrub ok
Dec 11 09:16:00 compute-1 ceph-mon[80018]: mgrmap e27: compute-0.wwpcae(active, since 2s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:00 compute-1 ceph-mon[80018]: 2.4 scrub starts
Dec 11 09:16:00 compute-1 ceph-mon[80018]: 2.4 scrub ok
Dec 11 09:16:00 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec 11 09:16:00 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:00 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:00 compute-1 sudo[82147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:16:00 compute-1 sudo[82147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82147]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 sudo[82172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:16:00 compute-1 sudo[82172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82172]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 sudo[82197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 11 09:16:00 compute-1 sudo[82197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82197]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 11 09:16:00 compute-1 sudo[82222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:00 compute-1 sudo[82222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82222]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:00 compute-1 sudo[82247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-1 sudo[82247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82247]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 sudo[82295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-1 sudo[82295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82295]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 sudo[82320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:16:00 compute-1 sudo[82320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82320]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:00 compute-1 sudo[82345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:16:00 compute-1 sudo[82345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:00 compute-1 sudo[82345]: pam_unix(sudo:session): session closed for user root
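
The sudo trail above is cephadm distributing /etc/ceph/ceph.conf with a stage-then-rename pattern: mkdir the destination, create a *.new file under a per-fsid staging tree, fix its owner and mode, then mv it into place so readers never observe a partially written file. A hypothetical condensation of the same steps (install_file is our name, not cephadm's; unlike the log, it stages next to the destination so the final rename is atomic even when /tmp is a separate filesystem):

    import os
    import tempfile

    def install_file(content: bytes, dest: str, mode: int = 0o644,
                     uid: int = 0, gid: int = 0) -> None:
        """Stage content beside dest, set owner and mode, then rename into place."""
        directory = os.path.dirname(dest)
        os.makedirs(directory, exist_ok=True)        # /bin/mkdir -p
        fd, tmp = tempfile.mkstemp(dir=directory, suffix=".new")
        try:
            os.write(fd, content)
            os.fchmod(fd, mode)                      # /bin/chmod 644
            os.fchown(fd, uid, gid)                  # /bin/chown 0:0, needs root
        finally:
            os.close(fd)
        os.replace(tmp, dest)                        # /bin/mv, atomic on same fs

    # e.g. install_file(b"[global]\n", "/etc/ceph/ceph.conf")

The same trail repeats further down for ceph.client.admin.keyring, with chmod 600 instead of 644 since that file carries a secret.
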
Dec 11 09:16:01 compute-1 sudo[82370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:01 compute-1 sudo[82370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82370]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:01 compute-1 sudo[82395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82395]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-1 sudo[82420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82420]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:01 compute-1 sudo[82445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82445]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-1 sudo[82470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-1 sudo[82470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82470]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 5.18 scrub starts
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 5.18 scrub ok
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
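
This dispatch is the mon-side form of the "ceph nfs cluster create" CLI; the saved nfs.cephfs and ingress.nfs.cephfs service specs appear a second later below. Rebuilding the payload as data makes the options easy to read (field values are copied verbatim from the log, including the trailing space in "placement"):

    import json

    cmd = {
        "prefix": "nfs cluster create",
        "cluster_id": "cephfs",
        "ingress": True,
        "virtual_ip": "192.168.122.2/24",
        "ingress_mode": "haproxy-protocol",
        "placement": "compute-0 compute-1 compute-2 ",
        "target": ["mon-mgr", ""],
    }
    print(json.dumps([cmd], indent=2))
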
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Adjusting osd_memory_target on compute-1 to 127.9M
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Unable to set osd_memory_target on compute-1 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 3.0 scrub starts
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 3.0 scrub ok
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec 11 09:16:01 compute-1 ceph-mon[80018]: osdmap e45: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Adjusting osd_memory_target on compute-0 to 127.9M
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Unable to set osd_memory_target on compute-0 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
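
The "Adjusting"/"Unable to set" pairs are the mgr's osd_memory_target autotuner at work: it divides the memory it believes is available on each host among the daemons there, arrives at roughly 128 MiB per OSD on these small VMs, and the mon rejects that because it is below the enforced minimum of 939524096 bytes (896 MiB). The "config rm ... osd_memory_target" dispatches above are the orchestrator clearing the per-OSD overrides it could not apply. A quick check of the arithmetic, with all values copied from the log:

    # Minimum and proposed values as reported in the "Unable to set" lines.
    MINIMUM = 939_524_096                       # 896 MiB floor
    proposed = {"compute-0": 134_203_392,
                "compute-1": 134_209_126,
                "compute-2": 134_209_126}

    for host, value in sorted(proposed.items()):
        mib_tenths = value * 10 // 2**20 / 10   # truncated like the mgr's "127.9M"
        verdict = "ok" if value >= MINIMUM else "below minimum"
        print(f"{host}: {value} bytes = {mib_tenths} MiB -> {verdict}")
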
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:16:01 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 2.e scrub starts
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 2.e scrub ok
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 4.1a scrub starts
Dec 11 09:16:01 compute-1 ceph-mon[80018]: 4.1a scrub ok
Dec 11 09:16:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec 11 09:16:01 compute-1 ceph-mon[80018]: osdmap e46: 3 total, 3 up, 3 in
Dec 11 09:16:01 compute-1 sudo[82518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-1 sudo[82518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82518]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:16:01 compute-1 sudo[82543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82543]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 11 09:16:01 compute-1 sudo[82568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:01 compute-1 sudo[82568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82568]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 11 09:16:01 compute-1 sudo[82593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:16:01 compute-1 sudo[82593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82593]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:16:01 compute-1 sudo[82618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82618]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:01 compute-1 sudo[82643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82643]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:01 compute-1 sudo[82668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82668]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:01 compute-1 sudo[82693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82693]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:01 compute-1 sudo[82741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:01 compute-1 sudo[82741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:01 compute-1 sudo[82741]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-1 sudo[82766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82766]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-1 sudo[82791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82791]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:02 compute-1 sudo[82816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82816]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:16:02 compute-1 sudo[82841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82841]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-1 sudo[82866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82866]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Dec 11 09:16:02 compute-1 sudo[82891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:02 compute-1 sudo[82891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82891]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 ceph-mon[80018]: pgmap v7: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:16:02 compute-1 ceph-mon[80018]: 5.b deep-scrub starts
Dec 11 09:16:02 compute-1 ceph-mon[80018]: 5.b deep-scrub ok
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:02 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:02 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:16:02 compute-1 ceph-mon[80018]: mgrmap e28: compute-0.wwpcae(active, since 4s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:02 compute-1 ceph-mon[80018]: 4.18 scrub starts
Dec 11 09:16:02 compute-1 ceph-mon[80018]: 7.18 scrub starts
Dec 11 09:16:02 compute-1 ceph-mon[80018]: 4.18 scrub ok
Dec 11 09:16:02 compute-1 ceph-mon[80018]: 7.18 scrub ok
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:16:02 compute-1 ceph-mon[80018]: osdmap e47: 3 total, 3 up, 3 in
Dec 11 09:16:02 compute-1 sudo[82916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-1 sudo[82916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82916]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-1 sudo[82964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82964]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 sudo[82989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:16:02 compute-1 sudo[82989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[82989]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 11 09:16:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 11 09:16:02 compute-1 sudo[83014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:02 compute-1 sudo[83014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:02 compute-1 sudo[83014]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:03 compute-1 ceph-mon[80018]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:03 compute-1 ceph-mon[80018]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 2.c scrub starts
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 2.c scrub ok
Dec 11 09:16:03 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 7.1b scrub starts
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 7.1b scrub ok
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 3.1c scrub starts
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 3.1c scrub ok
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 2.10 scrub starts
Dec 11 09:16:03 compute-1 ceph-mon[80018]: 2.10 scrub ok
Dec 11 09:16:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 11 09:16:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 11 09:16:04 compute-1 ceph-mon[80018]: pgmap v10: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:04 compute-1 ceph-mon[80018]: Deploying daemon node-exporter.compute-0 on compute-0
Dec 11 09:16:04 compute-1 ceph-mon[80018]: 7.1e deep-scrub starts
Dec 11 09:16:04 compute-1 ceph-mon[80018]: 7.1e deep-scrub ok
Dec 11 09:16:04 compute-1 ceph-mon[80018]: 4.1b scrub starts
Dec 11 09:16:04 compute-1 ceph-mon[80018]: 4.1b scrub ok
Dec 11 09:16:04 compute-1 ceph-mon[80018]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:16:04 compute-1 ceph-mon[80018]: mgrmap e29: compute-0.wwpcae(active, since 6s), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:16:04 compute-1 ceph-mon[80018]: 2.13 scrub starts
Dec 11 09:16:04 compute-1 ceph-mon[80018]: 2.13 scrub ok
Dec 11 09:16:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 11 09:16:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 11 09:16:05 compute-1 ceph-mon[80018]: 7.6 deep-scrub starts
Dec 11 09:16:05 compute-1 ceph-mon[80018]: 7.6 deep-scrub ok
Dec 11 09:16:05 compute-1 ceph-mon[80018]: 7.1c scrub starts
Dec 11 09:16:05 compute-1 ceph-mon[80018]: 7.1c scrub ok
Dec 11 09:16:05 compute-1 ceph-mon[80018]: 5.13 scrub starts
Dec 11 09:16:05 compute-1 ceph-mon[80018]: 5.13 scrub ok
Dec 11 09:16:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 11 09:16:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 11 09:16:05 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:06 compute-1 ceph-mon[80018]: pgmap v11: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec 11 09:16:06 compute-1 ceph-mon[80018]: 7.3 scrub starts
Dec 11 09:16:06 compute-1 ceph-mon[80018]: 7.3 scrub ok
Dec 11 09:16:06 compute-1 ceph-mon[80018]: 7.17 scrub starts
Dec 11 09:16:06 compute-1 ceph-mon[80018]: 7.17 scrub ok
Dec 11 09:16:06 compute-1 ceph-mon[80018]: 5.8 scrub starts
Dec 11 09:16:06 compute-1 ceph-mon[80018]: 5.8 scrub ok
Dec 11 09:16:06 compute-1 sudo[83039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:06 compute-1 sudo[83039]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:06 compute-1 sudo[83039]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 11 09:16:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 11 09:16:06 compute-1 sudo[83064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:06 compute-1 sudo[83064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:06 compute-1 systemd[1]: Reloading.
Dec 11 09:16:07 compute-1 systemd-rc-local-generator[83156]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:07 compute-1 systemd-sysv-generator[83159]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:07 compute-1 systemd[1]: Reloading.
Dec 11 09:16:07 compute-1 systemd-rc-local-generator[83191]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:07 compute-1 systemd-sysv-generator[83196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
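
Each "Reloading." pair is a systemctl daemon-reload issued after cephadm writes or updates a unit file for the daemon being deployed; the rc.local and SysV-generator messages are routine generator output during the reload, not errors. cephadm names its units ceph-<fsid>@<daemon-type>.<daemon-id>.service, which is how the "Starting Ceph node-exporter.compute-1 ..." line just below maps to a unit. A tiny sketch of that convention (unit_name is our helper):

    FSID = "31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060"

    def unit_name(daemon: str) -> str:
        return f"ceph-{FSID}@{daemon}.service"

    print(unit_name("node-exporter.compute-1"))
    # -> ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@node-exporter.compute-1.service
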
Dec 11 09:16:07 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:07 compute-1 ceph-mon[80018]: 7.2 scrub starts
Dec 11 09:16:07 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:07 compute-1 ceph-mon[80018]: 7.2 scrub ok
Dec 11 09:16:07 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:07 compute-1 ceph-mon[80018]: Deploying daemon node-exporter.compute-1 on compute-1
Dec 11 09:16:07 compute-1 ceph-mon[80018]: 7.12 scrub starts
Dec 11 09:16:07 compute-1 ceph-mon[80018]: 7.12 scrub ok
Dec 11 09:16:07 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 11 09:16:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 11 09:16:07 compute-1 bash[83254]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec 11 09:16:08 compute-1 bash[83254]: Getting image source signatures
Dec 11 09:16:08 compute-1 bash[83254]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec 11 09:16:08 compute-1 bash[83254]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec 11 09:16:08 compute-1 bash[83254]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec 11 09:16:08 compute-1 ceph-mon[80018]: pgmap v12: 194 pgs: 194 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 11 op/s
Dec 11 09:16:08 compute-1 ceph-mon[80018]: 4.14 scrub starts
Dec 11 09:16:08 compute-1 ceph-mon[80018]: 4.14 scrub ok
Dec 11 09:16:08 compute-1 ceph-mon[80018]: 7.4 scrub starts
Dec 11 09:16:08 compute-1 ceph-mon[80018]: 7.4 scrub ok
Dec 11 09:16:08 compute-1 ceph-mon[80018]: 7.15 scrub starts
Dec 11 09:16:08 compute-1 ceph-mon[80018]: 7.15 scrub ok
Dec 11 09:16:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 11 09:16:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 11 09:16:08 compute-1 bash[83254]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec 11 09:16:08 compute-1 bash[83254]: Writing manifest to image destination
Dec 11 09:16:08 compute-1 podman[83254]: 2025-12-11 09:16:08.669930707 +0000 UTC m=+1.020519689 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec 11 09:16:08 compute-1 podman[83254]: 2025-12-11 09:16:08.690044039 +0000 UTC m=+1.040632991 container create 851fbde2323ef3f3948778a526539f2fee5bda7b4a505af3c7d6d952672edf2a (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:16:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ade3258d85025ff48a9962fe763af454b3c623034227b8ca97d11b0281089806/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:08 compute-1 podman[83254]: 2025-12-11 09:16:08.748307309 +0000 UTC m=+1.098896281 container init 851fbde2323ef3f3948778a526539f2fee5bda7b4a505af3c7d6d952672edf2a (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:16:08 compute-1 podman[83254]: 2025-12-11 09:16:08.754709555 +0000 UTC m=+1.105298507 container start 851fbde2323ef3f3948778a526539f2fee5bda7b4a505af3c7d6d952672edf2a (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.764Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.764Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.766Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.766Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.766Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.766Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
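
The two "Parsed flag" lines record node_exporter's default exclude regexes for the diskstats and filesystem collectors. The diskstats pattern drops partitions while keeping whole disks, which a quick test makes visible (the pattern is copied verbatim from the log; the sample device names are ours):

    import re

    pattern = re.compile(r"^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$")
    for dev in ["vda", "vda1", "nvme0n1", "nvme0n1p2", "loop3"]:
        print(dev, "->", "excluded" if pattern.match(dev) else "kept")
    # vda and nvme0n1 are kept; vda1, nvme0n1p2 and loop3 are excluded.
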
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.766Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.766Z caller=node_exporter.go:117 level=info collector=arp
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=bcache
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=bonding
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=cpu
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=dmi
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=edac
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=entropy
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=filefd
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=hwmon
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=netclass
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=netdev
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=netstat
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=nfs
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=nvme
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=os
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=pressure
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=rapl
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=selinux
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=softnet
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=stat
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=textfile
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=time
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 11 09:16:08 compute-1 bash[83254]: 851fbde2323ef3f3948778a526539f2fee5bda7b4a505af3c7d6d952672edf2a
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=uname
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=xfs
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.767Z caller=node_exporter.go:117 level=info collector=zfs
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.768Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec 11 09:16:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1[83330]: ts=2025-12-11T09:16:08.768Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 11 09:16:08 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
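
The unit is up: podman pulled the image, created and started the container, node_exporter enumerated its collectors, and it now listens on :9100 with TLS disabled (per the tls_config lines above), so its metrics are plain HTTP. A minimal scrape, assuming compute-1:9100 is reachable from wherever this runs:

    from urllib.request import urlopen

    # Port and the absence of TLS both come from the log above.
    with urlopen("http://compute-1:9100/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_uname_info"):
                print(line)   # a sample emitted by the "uname" collector
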
Dec 11 09:16:08 compute-1 sudo[83064]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Dec 11 09:16:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Dec 11 09:16:09 compute-1 ceph-mon[80018]: 2.1b scrub starts
Dec 11 09:16:09 compute-1 ceph-mon[80018]: 2.1b scrub ok
Dec 11 09:16:09 compute-1 ceph-mon[80018]: 7.e scrub starts
Dec 11 09:16:09 compute-1 ceph-mon[80018]: 7.e scrub ok
Dec 11 09:16:09 compute-1 ceph-mon[80018]: 7.0 scrub starts
Dec 11 09:16:09 compute-1 ceph-mon[80018]: 7.0 scrub ok
Dec 11 09:16:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:09 compute-1 ceph-mon[80018]: Deploying daemon node-exporter.compute-2 on compute-2
Dec 11 09:16:09 compute-1 ceph-mon[80018]: pgmap v13: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Dec 11 09:16:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 11 09:16:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 11 09:16:10 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:11 compute-1 ceph-mon[80018]: 5.d scrub starts
Dec 11 09:16:11 compute-1 ceph-mon[80018]: 7.f scrub starts
Dec 11 09:16:11 compute-1 ceph-mon[80018]: 5.d scrub ok
Dec 11 09:16:11 compute-1 ceph-mon[80018]: 7.f scrub ok
Dec 11 09:16:11 compute-1 ceph-mon[80018]: 7.1 deep-scrub starts
Dec 11 09:16:11 compute-1 ceph-mon[80018]: 7.1 deep-scrub ok
Dec 11 09:16:11 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 11 09:16:11 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.a scrub starts
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.a scrub ok
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.8 scrub starts
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.8 scrub ok
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.7 scrub starts
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.7 scrub ok
Dec 11 09:16:12 compute-1 ceph-mon[80018]: pgmap v14: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.1d deep-scrub starts
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.1d deep-scrub ok
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.9 scrub starts
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 7.9 scrub ok
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 6.3 scrub starts
Dec 11 09:16:12 compute-1 ceph-mon[80018]: 6.3 scrub ok
Dec 11 09:16:12 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 11 09:16:12 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 11 09:16:13 compute-1 ceph-mon[80018]: 7.b scrub starts
Dec 11 09:16:13 compute-1 ceph-mon[80018]: 7.b scrub ok
Dec 11 09:16:13 compute-1 ceph-mon[80018]: 7.d scrub starts
Dec 11 09:16:13 compute-1 ceph-mon[80018]: 7.d scrub ok
Dec 11 09:16:13 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 11 09:16:13 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 11 09:16:14 compute-1 ceph-mon[80018]: pgmap v15: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 6 op/s
Dec 11 09:16:14 compute-1 ceph-mon[80018]: 7.10 scrub starts
Dec 11 09:16:14 compute-1 ceph-mon[80018]: 7.10 scrub ok
Dec 11 09:16:14 compute-1 ceph-mon[80018]: 7.c scrub starts
Dec 11 09:16:14 compute-1 ceph-mon[80018]: 7.c scrub ok
Dec 11 09:16:14 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 11 09:16:14 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 11 09:16:15 compute-1 ceph-mon[80018]: 7.13 scrub starts
Dec 11 09:16:15 compute-1 ceph-mon[80018]: 7.13 scrub ok
Dec 11 09:16:15 compute-1 ceph-mon[80018]: 7.19 scrub starts
Dec 11 09:16:15 compute-1 ceph-mon[80018]: 7.19 scrub ok
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:16:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:15 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 11 09:16:15 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 11 09:16:15 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:16 compute-1 ceph-mon[80018]: pgmap v16: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 6 op/s
Dec 11 09:16:16 compute-1 ceph-mon[80018]: 6.1a scrub starts
Dec 11 09:16:16 compute-1 ceph-mon[80018]: 6.1a scrub ok
Dec 11 09:16:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 11 09:16:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 11 09:16:18 compute-1 ceph-mon[80018]: 7.1a scrub starts
Dec 11 09:16:18 compute-1 ceph-mon[80018]: 7.1a scrub ok
Dec 11 09:16:18 compute-1 ceph-mon[80018]: pgmap v17: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:18 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:20 compute-1 ceph-mon[80018]: pgmap v18: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:20 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.aenhnr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
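
Before deploying each RGW daemon the orchestrator mints a dedicated auth entity whose caps are scoped to RGW's needs: full mon access, read-write mgr access, and rwx on OSDs restricted to pools tagged for rgw. The equivalent manual command would look roughly like the sketch below (entity and caps copied verbatim from the dispatch; running it needs an admin keyring, and get-or-create is idempotent, so a rerun just returns the existing key):

    import subprocess

    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.rgw.rgw.compute-2.aenhnr",
         "mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"],
        check=True,
    )
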
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:22 compute-1 ceph-mon[80018]: Deploying daemon rgw.rgw.compute-2.aenhnr on compute-2
Dec 11 09:16:22 compute-1 ceph-mon[80018]: pgmap v19: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:22 compute-1 sudo[83339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:22 compute-1 sudo[83339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:22 compute-1 sudo[83339]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:22 compute-1 sudo[83364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:22 compute-1 sudo[83364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.364953291 +0000 UTC m=+0.022560890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.662829612 +0000 UTC m=+0.320437201 container create ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_zhukovsky, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 11 09:16:23 compute-1 systemd[1]: Started libpod-conmon-ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd.scope.
Dec 11 09:16:23 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.75962461 +0000 UTC m=+0.417232249 container init ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.767174237 +0000 UTC m=+0.424781816 container start ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:23 compute-1 thirsty_zhukovsky[83448]: 167 167
Dec 11 09:16:23 compute-1 systemd[1]: libpod-ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd.scope: Deactivated successfully.
Dec 11 09:16:23 compute-1 conmon[83448]: conmon ef05c94b1300689212b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd.scope/container/memory.events
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.780092562 +0000 UTC m=+0.437700161 container attach ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.780473023 +0000 UTC m=+0.438080602 container died ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_zhukovsky, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 11 09:16:23 compute-1 systemd[1]: var-lib-containers-storage-overlay-1115007a3366d689d712edc18e26ae002ba65ab2711085f3916cd60273d11f5f-merged.mount: Deactivated successfully.
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.hnfveq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:23 compute-1 ceph-mon[80018]: Deploying daemon rgw.rgw.compute-1.hnfveq on compute-1
Dec 11 09:16:23 compute-1 ceph-mon[80018]: pgmap v20: 194 pgs: 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:23 compute-1 podman[83431]: 2025-12-11 09:16:23.858162547 +0000 UTC m=+0.515770126 container remove ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 11 09:16:23 compute-1 systemd[1]: libpod-conmon-ef05c94b1300689212b6f8908b7497ea6dd6414db1ea814de27c0152ded7a7dd.scope: Deactivated successfully.
Dec 11 09:16:23 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Dec 11 09:16:23 compute-1 systemd[1]: Reloading.
Dec 11 09:16:23 compute-1 systemd-rc-local-generator[83490]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:23 compute-1 systemd-sysv-generator[83497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:24 compute-1 systemd[1]: Reloading.
Dec 11 09:16:24 compute-1 systemd-rc-local-generator[83535]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:24 compute-1 systemd-sysv-generator[83540]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:24 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.hnfveq for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:24 compute-1 podman[83592]: 2025-12-11 09:16:24.703794321 +0000 UTC m=+0.090397824 container create 3f284bd32079f4714d86974d1710e0102411bd59389abf2e50ad280d6227d862 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-1-hnfveq, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec 11 09:16:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe56162a2f5473a5eee38048997cee27d6e2c8673341706af88f318d57a6829/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe56162a2f5473a5eee38048997cee27d6e2c8673341706af88f318d57a6829/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe56162a2f5473a5eee38048997cee27d6e2c8673341706af88f318d57a6829/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:24 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fe56162a2f5473a5eee38048997cee27d6e2c8673341706af88f318d57a6829/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.hnfveq supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:24 compute-1 podman[83592]: 2025-12-11 09:16:24.760921589 +0000 UTC m=+0.147525112 container init 3f284bd32079f4714d86974d1710e0102411bd59389abf2e50ad280d6227d862 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-1-hnfveq, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Dec 11 09:16:24 compute-1 podman[83592]: 2025-12-11 09:16:24.765613298 +0000 UTC m=+0.152216801 container start 3f284bd32079f4714d86974d1710e0102411bd59389abf2e50ad280d6227d862 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-1-hnfveq, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Dec 11 09:16:24 compute-1 bash[83592]: 3f284bd32079f4714d86974d1710e0102411bd59389abf2e50ad280d6227d862
Dec 11 09:16:24 compute-1 podman[83592]: 2025-12-11 09:16:24.68596065 +0000 UTC m=+0.072564173 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:24 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.hnfveq for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:24 compute-1 sudo[83364]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:24 compute-1 radosgw[83611]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:16:24 compute-1 radosgw[83611]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec 11 09:16:24 compute-1 radosgw[83611]: framework: beast
Dec 11 09:16:24 compute-1 radosgw[83611]: framework conf key: endpoint, val: 192.168.122.101:8082
Dec 11 09:16:24 compute-1 radosgw[83611]: init_numa not setting numa affinity
Dec 11 09:16:24 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Dec 11 09:16:24 compute-1 ceph-mon[80018]: osdmap e48: 3 total, 3 up, 3 in
Dec 11 09:16:24 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/1385895065' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 11 09:16:24 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec 11 09:16:25 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:25 compute-1 ceph-mon[80018]: pgmap v22: 195 pgs: 1 unknown, 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 11 09:16:25 compute-1 ceph-mon[80018]: osdmap e49: 3 total, 3 up, 3 in
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-1 ceph-mon[80018]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dblyhr", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:25 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:25 compute-1 ceph-mon[80018]: Deploying daemon rgw.rgw.compute-0.dblyhr on compute-0
Dec 11 09:16:25 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Dec 11 09:16:25 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 11 09:16:25 compute-1 ceph-mon[80018]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 50 pg[10.0( empty local-lis/les=0/0 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [0] r=0 lpr=50 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:16:26 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Dec 11 09:16:26 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 51 pg[10.0( empty local-lis/les=50/51 n=0 ec=50/50 lis/c=0/0 les/c/f=0/0/0 sis=50) [0] r=0 lpr=50 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:16:26 compute-1 ceph-mon[80018]: osdmap e50: 3 total, 3 up, 3 in
Dec 11 09:16:26 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-1 ceph-mon[80018]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:26 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec 11 09:16:28 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Dec 11 09:16:28 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 11 09:16:28 compute-1 ceph-mon[80018]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:28 compute-1 ceph-mon[80018]: pgmap v25: 196 pgs: 2 unknown, 194 active+clean; 449 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 11 09:16:28 compute-1 ceph-mon[80018]: osdmap e51: 3 total, 3 up, 3 in
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-1 ceph-mon[80018]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.abebdg", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:28 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:28 compute-1 ceph-mon[80018]: Deploying daemon mds.cephfs.compute-2.abebdg on compute-2
Dec 11 09:16:29 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Dec 11 09:16:29 compute-1 ceph-mon[80018]: osdmap e52: 3 total, 3 up, 3 in
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 11 09:16:29 compute-1 ceph-mon[80018]: osdmap e53: 3 total, 3 up, 3 in
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Dec 11 09:16:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 54 pg[12.0( empty local-lis/les=0/0 n=0 ec=54/54 lis/c=0/0 les/c/f=0/0/0 sis=54) [0] r=0 lpr=54 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 11 09:16:30 compute-1 ceph-mon[80018]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e3 new map
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-12-11T09:16:30.087627+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:15:57.915143+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.abebdg{-1:24187} state up:standby seq 1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e4 new map
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-12-11T09:16:30.098944+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:30.098928+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:creating seq 1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 11 09:16:30 compute-1 ceph-mon[80018]: pgmap v28: 197 pgs: 1 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ejykhm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: Deploying daemon mds.cephfs.compute-0.ejykhm on compute-0
Dec 11 09:16:30 compute-1 ceph-mon[80018]: osdmap e54: 3 total, 3 up, 3 in
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:boot
Dec 11 09:16:30 compute-1 ceph-mon[80018]: daemon mds.cephfs.compute-2.abebdg assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 11 09:16:30 compute-1 ceph-mon[80018]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 11 09:16:30 compute-1 ceph-mon[80018]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 11 09:16:30 compute-1 ceph-mon[80018]: fsmap cephfs:0 1 up:standby
Dec 11 09:16:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"}]: dispatch
Dec 11 09:16:30 compute-1 ceph-mon[80018]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:creating}
Dec 11 09:16:30 compute-1 ceph-mon[80018]: daemon mds.cephfs.compute-2.abebdg is now active in filesystem cephfs as rank 0
Dec 11 09:16:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Dec 11 09:16:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 11 09:16:31 compute-1 ceph-mon[80018]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:31 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 55 pg[12.0( empty local-lis/les=54/55 n=0 ec=54/54 lis/c=0/0 les/c/f=0/0/0 sis=54) [0] r=0 lpr=54 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:16:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e5 new map
Dec 11 09:16:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-12-11T09:16:31.110917+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:31.110914+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e6 new map
Dec 11 09:16:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-12-11T09:16:31.130086+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:31.110914+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 2 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:31 compute-1 sudo[84199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:31 compute-1 sudo[84199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:31 compute-1 sudo[84199]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:31 compute-1 sudo[84224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:31 compute-1 sudo[84224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.705667425 +0000 UTC m=+0.044862743 container create 6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goodall, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:31 compute-1 systemd[1]: Started libpod-conmon-6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee.scope.
Dec 11 09:16:31 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.688603196 +0000 UTC m=+0.027798524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.791615276 +0000 UTC m=+0.130810604 container init 6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goodall, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.801403674 +0000 UTC m=+0.140598992 container start 6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.805721833 +0000 UTC m=+0.144917201 container attach 6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goodall, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 11 09:16:31 compute-1 admiring_goodall[84308]: 167 167
Dec 11 09:16:31 compute-1 systemd[1]: libpod-6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee.scope: Deactivated successfully.
Dec 11 09:16:31 compute-1 conmon[84308]: conmon 6643cdaf7685339fad33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee.scope/container/memory.events
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.807334828 +0000 UTC m=+0.146530156 container died 6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:31 compute-1 systemd[1]: var-lib-containers-storage-overlay-0a6c1123117570e39d2912bb067451aec409c5a5d244c7b679b7711028313d45-merged.mount: Deactivated successfully.
Dec 11 09:16:31 compute-1 podman[84291]: 2025-12-11 09:16:31.84384188 +0000 UTC m=+0.183037188 container remove 6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_goodall, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:16:31 compute-1 systemd[1]: libpod-conmon-6643cdaf7685339fad33e8c33da27f462fd5ea2b435bc999bd2db674c8f88aee.scope: Deactivated successfully.
Dec 11 09:16:31 compute-1 systemd[1]: Reloading.
Dec 11 09:16:31 compute-1 systemd-rc-local-generator[84349]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:31 compute-1 systemd-sysv-generator[84352]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:32 compute-1 ceph-mon[80018]: pgmap v31: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 2.0 KiB/s wr, 11 op/s
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 11 09:16:32 compute-1 ceph-mon[80018]: osdmap e55: 3 total, 3 up, 3 in
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? 192.168.122.102:0/487353682' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? 192.168.122.101:0/3289101500' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:active
Dec 11 09:16:32 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] up:boot
Dec 11 09:16:32 compute-1 ceph-mon[80018]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 1 up:standby
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:32 compute-1 ceph-mon[80018]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 1 up:standby
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.hifxsh", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 11 09:16:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:32 compute-1 ceph-mon[80018]: Deploying daemon mds.cephfs.compute-1.hifxsh on compute-1
Dec 11 09:16:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Dec 11 09:16:32 compute-1 systemd[1]: Reloading.
Dec 11 09:16:32 compute-1 systemd-rc-local-generator[84394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:32 compute-1 systemd-sysv-generator[84398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:32 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.hifxsh for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:32 compute-1 radosgw[83611]: v1 topic migration: starting v1 topic migration..
Dec 11 09:16:32 compute-1 radosgw[83611]: LDAP not started since no server URIs were provided in the configuration.
Dec 11 09:16:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-rgw-rgw-compute-1-hnfveq[83607]: 2025-12-11T09:16:32.532+0000 7f98503fb980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec 11 09:16:32 compute-1 radosgw[83611]: v1 topic migration: finished v1 topic migration
Dec 11 09:16:32 compute-1 radosgw[83611]: framework: beast
Dec 11 09:16:32 compute-1 radosgw[83611]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 11 09:16:32 compute-1 radosgw[83611]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 11 09:16:32 compute-1 radosgw[83611]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-1 radosgw[83611]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Dec 11 09:16:32 compute-1 radosgw[83611]: starting handler: beast
Dec 11 09:16:32 compute-1 radosgw[83611]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:16:32 compute-1 radosgw[83611]: mgrc service_daemon_register rgw.24176 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.hnfveq,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3b269e18-0f77-4f14-9aee-ab88040a4f16,zone_name=default,zonegroup_id=dab2b214-8f75-4660-ba0c-2a653c230bd3,zonegroup_name=default}
Dec 11 09:16:32 compute-1 podman[84485]: 2025-12-11 09:16:32.953771272 +0000 UTC m=+0.048949815 container create 455df41a7fde79eb4513ba50d6c7a02dedfedabe13316cd29b36ece9991dc204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-1-hifxsh, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 11 09:16:33 compute-1 podman[84485]: 2025-12-11 09:16:32.93331383 +0000 UTC m=+0.028492393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd4e60b38905acb98f7a7409ddf7ea564fd7e3fa8928fcc395b734d22ff0aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd4e60b38905acb98f7a7409ddf7ea564fd7e3fa8928fcc395b734d22ff0aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd4e60b38905acb98f7a7409ddf7ea564fd7e3fa8928fcc395b734d22ff0aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8edd4e60b38905acb98f7a7409ddf7ea564fd7e3fa8928fcc395b734d22ff0aa/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.hifxsh supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:33 compute-1 podman[84485]: 2025-12-11 09:16:33.06984941 +0000 UTC m=+0.165028063 container init 455df41a7fde79eb4513ba50d6c7a02dedfedabe13316cd29b36ece9991dc204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-1-hifxsh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 11 09:16:33 compute-1 podman[84485]: 2025-12-11 09:16:33.078343363 +0000 UTC m=+0.173521916 container start 455df41a7fde79eb4513ba50d6c7a02dedfedabe13316cd29b36ece9991dc204 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-1-hifxsh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 11 09:16:33 compute-1 bash[84485]: 455df41a7fde79eb4513ba50d6c7a02dedfedabe13316cd29b36ece9991dc204
Dec 11 09:16:33 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.hifxsh for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:33 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-2.aenhnr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:33 compute-1 ceph-mon[80018]: from='client.? ' entity='client.rgw.rgw.compute-1.hnfveq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:33 compute-1 ceph-mon[80018]: from='client.? 192.168.122.100:0/4067601783' entity='client.rgw.rgw.compute-0.dblyhr' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 11 09:16:33 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:33 compute-1 ceph-mon[80018]: osdmap e56: 3 total, 3 up, 3 in
Dec 11 09:16:33 compute-1 sudo[84224]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:33 compute-1 ceph-mds[84504]: set uid:gid to 167:167 (ceph:ceph)
Dec 11 09:16:33 compute-1 ceph-mds[84504]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec 11 09:16:33 compute-1 ceph-mds[84504]: main not setting numa affinity
Dec 11 09:16:33 compute-1 ceph-mds[84504]: pidfile_write: ignore empty --pid-file
Dec 11 09:16:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mds-cephfs-compute-1-hifxsh[84500]: starting mds.cephfs.compute-1.hifxsh at 
Dec 11 09:16:33 compute-1 ceph-mds[84504]: mds.cephfs.compute-1.hifxsh Updating MDS map to version 6 from mon.2
Dec 11 09:16:33 compute-1 sudo[84523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:33 compute-1 sudo[84523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:33 compute-1 sudo[84523]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:33 compute-1 sudo[84548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:33 compute-1 sudo[84548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:34 compute-1 ceph-mon[80018]: pgmap v34: 198 pgs: 2 unknown, 196 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Cluster is now healthy
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Creating key for client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.vlrwzy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Bind address in nfs.cephfs.0.0.compute-1.vlrwzy's ganesha conf is defaulting to empty
Dec 11 09:16:34 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:34 compute-1 ceph-mon[80018]: Deploying daemon nfs.cephfs.0.0.compute-1.vlrwzy on compute-1
Dec 11 09:16:34 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e7 new map
Dec 11 09:16:34 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-12-11T09:16:34:169469+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:34.169018+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hifxsh{-1:24191} state up:standby seq 1 addr [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:34 compute-1 ceph-mds[84504]: mds.cephfs.compute-1.hifxsh Updating MDS map to version 7 from mon.2
Dec 11 09:16:34 compute-1 ceph-mds[84504]: mds.cephfs.compute-1.hifxsh Monitors have assigned me to become a standby
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.374962843 +0000 UTC m=+0.044498643 container create 3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec 11 09:16:34 compute-1 systemd[1]: Started libpod-conmon-3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55.scope.
Dec 11 09:16:34 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.352967199 +0000 UTC m=+0.022503009 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.477383956 +0000 UTC m=+0.146919766 container init 3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.484245364 +0000 UTC m=+0.153781154 container start 3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.488355807 +0000 UTC m=+0.157891597 container attach 3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_matsumoto, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:34 compute-1 eager_matsumoto[84630]: 167 167
Dec 11 09:16:34 compute-1 systemd[1]: libpod-3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55.scope: Deactivated successfully.
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.491949496 +0000 UTC m=+0.161485296 container died 3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_matsumoto, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:34 compute-1 systemd[1]: var-lib-containers-storage-overlay-345a06156516f70f42d91ca994422aefc7ce9df5b95829abc81bc4024295b309-merged.mount: Deactivated successfully.
Dec 11 09:16:34 compute-1 podman[84614]: 2025-12-11 09:16:34.526221357 +0000 UTC m=+0.195757147 container remove 3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_matsumoto, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 11 09:16:34 compute-1 systemd[1]: libpod-conmon-3cc08310e72c12961e448889ce3315d30eaa90ef8477559fecb8525b4a131d55.scope: Deactivated successfully.
Dec 11 09:16:34 compute-1 systemd[1]: Reloading.
Dec 11 09:16:34 compute-1 systemd-rc-local-generator[84676]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:34 compute-1 systemd-sysv-generator[84679]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:34 compute-1 systemd[1]: Reloading.
Dec 11 09:16:34 compute-1 systemd-rc-local-generator[84713]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:34 compute-1 systemd-sysv-generator[84717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:35 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vlrwzy for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:35 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] up:boot
Dec 11 09:16:35 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] up:active
Dec 11 09:16:35 compute-1 ceph-mon[80018]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:35 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"}]: dispatch
Dec 11 09:16:35 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e8 new map
Dec 11 09:16:35 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-12-11T09:16:35:218729+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:34.169018+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hifxsh{-1:24191} state up:standby seq 1 addr [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:35 compute-1 podman[84769]: 2025-12-11 09:16:35.478933742 +0000 UTC m=+0.060623726 container create 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:16:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b3b4e2ab8984559e54c49073d5e3591dada623a1dfeaf1ecb09f7be42ff0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:35 compute-1 podman[84769]: 2025-12-11 09:16:35.460767713 +0000 UTC m=+0.042457657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:16:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b3b4e2ab8984559e54c49073d5e3591dada623a1dfeaf1ecb09f7be42ff0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b3b4e2ab8984559e54c49073d5e3591dada623a1dfeaf1ecb09f7be42ff0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3445b3b4e2ab8984559e54c49073d5e3591dada623a1dfeaf1ecb09f7be42ff0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vlrwzy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:35 compute-1 podman[84769]: 2025-12-11 09:16:35.580937574 +0000 UTC m=+0.162627608 container init 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:16:35 compute-1 podman[84769]: 2025-12-11 09:16:35.586172806 +0000 UTC m=+0.167862790 container start 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec 11 09:16:35 compute-1 bash[84769]: 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c
Dec 11 09:16:35 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vlrwzy for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 11 09:16:35 compute-1 sudo[84548]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 11 09:16:35 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:16:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:36 compute-1 ceph-mon[80018]: pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 8.2 KiB/s wr, 166 op/s
Dec 11 09:16:36 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] up:standby
Dec 11 09:16:36 compute-1 ceph-mon[80018]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:36 compute-1 ceph-mon[80018]: Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:37 compute-1 ceph-mon[80018]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec 11 09:16:37 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:38 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e9 new map
Dec 11 09:16:38 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           btime 2025-12-11T09:16:38:106806+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        7
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-11T09:15:57.915143+0000
                                           modified        2025-12-11T09:16:34.169018+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24187}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24187 members: 24187
                                           [mds.cephfs.compute-2.abebdg{0:24187} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1341793648,v1:192.168.122.102:6805/1341793648] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.ejykhm{-1:14481} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2227004311,v1:192.168.122.100:6807/2227004311] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.hifxsh{-1:24191} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] compat {c=[1],r=[1],i=[1fff]}]
Dec 11 09:16:38 compute-1 ceph-mds[84504]: mds.cephfs.compute-1.hifxsh Updating MDS map to version 9 from mon.2
Dec 11 09:16:38 compute-1 ceph-mon[80018]: pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 7.0 KiB/s wr, 142 op/s
Dec 11 09:16:38 compute-1 ceph-mon[80018]: mds.? [v2:192.168.122.101:6804/3436560509,v1:192.168.122.101:6805/3436560509] up:standby
Dec 11 09:16:38 compute-1 ceph-mon[80018]: fsmap cephfs:1 {0=cephfs.compute-2.abebdg=up:active} 2 up:standby
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 11 09:16:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:16:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ydhhov-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:40 compute-1 ceph-mon[80018]: Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:40 compute-1 ceph-mon[80018]: Creating key for client.nfs.cephfs.1.0.compute-2.ydhhov-rgw
Dec 11 09:16:40 compute-1 ceph-mon[80018]: Bind address in nfs.cephfs.1.0.compute-2.ydhhov's ganesha conf is defaulting to empty
Dec 11 09:16:40 compute-1 ceph-mon[80018]: Deploying daemon nfs.cephfs.1.0.compute-2.ydhhov on compute-2
Dec 11 09:16:40 compute-1 ceph-mon[80018]: pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 7.1 KiB/s wr, 345 op/s
Dec 11 09:16:40 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:40 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:16:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:40 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:16:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:40 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:41 compute-1 ceph-mon[80018]: pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 151 KiB/s rd, 5.8 KiB/s wr, 280 op/s
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:41 compute-1 ceph-mon[80018]: Creating key for client.nfs.cephfs.2.0.compute-0.iryjby
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec 11 09:16:41 compute-1 ceph-mon[80018]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec 11 09:16:41 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:43 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:44 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:16:44 compute-1 ceph-mon[80018]: pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 5.3 KiB/s wr, 255 op/s
Dec 11 09:16:44 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec 11 09:16:44 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec 11 09:16:44 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec 11 09:16:44 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.iryjby-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 11 09:16:44 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:16:45 compute-1 ceph-mon[80018]: Rados config object exists: conf-nfs.cephfs
Dec 11 09:16:45 compute-1 ceph-mon[80018]: Creating key for client.nfs.cephfs.2.0.compute-0.iryjby-rgw
Dec 11 09:16:45 compute-1 ceph-mon[80018]: Bind address in nfs.cephfs.2.0.compute-0.iryjby's ganesha conf is defaulting to empty
Dec 11 09:16:45 compute-1 ceph-mon[80018]: Deploying daemon nfs.cephfs.2.0.compute-0.iryjby on compute-0
Dec 11 09:16:45 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:46 compute-1 sudo[84837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:16:46 compute-1 sudo[84837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:46 compute-1 sudo[84837]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:46 compute-1 ceph-mon[80018]: pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 5.5 KiB/s wr, 232 op/s
Dec 11 09:16:46 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:46 compute-1 sudo[84862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:16:46 compute-1 sudo[84862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:16:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:16:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:16:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:16:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:16:47 compute-1 ceph-mon[80018]: Deploying daemon haproxy.nfs.cephfs.compute-1.aifiay on compute-1
Dec 11 09:16:47 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:48 compute-1 ceph-mon[80018]: pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 1.5 KiB/s wr, 151 op/s
Dec 11 09:16:49 compute-1 podman[84927]: 2025-12-11 09:16:49.832970452 +0000 UTC m=+3.078380285 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 11 09:16:49 compute-1 podman[84927]: 2025-12-11 09:16:49.920861355 +0000 UTC m=+3.166271168 container create f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a (image=quay.io/ceph/haproxy:2.3, name=lucid_murdock)
Dec 11 09:16:49 compute-1 systemd[1]: Started libpod-conmon-f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a.scope.
Dec 11 09:16:49 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:16:50 compute-1 podman[84927]: 2025-12-11 09:16:50.000534824 +0000 UTC m=+3.245944637 container init f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a (image=quay.io/ceph/haproxy:2.3, name=lucid_murdock)
Dec 11 09:16:50 compute-1 podman[84927]: 2025-12-11 09:16:50.010034574 +0000 UTC m=+3.255444397 container start f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a (image=quay.io/ceph/haproxy:2.3, name=lucid_murdock)
Dec 11 09:16:50 compute-1 podman[84927]: 2025-12-11 09:16:50.01386829 +0000 UTC m=+3.259278203 container attach f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a (image=quay.io/ceph/haproxy:2.3, name=lucid_murdock)
Dec 11 09:16:50 compute-1 lucid_murdock[85042]: 0 0
Dec 11 09:16:50 compute-1 systemd[1]: libpod-f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a.scope: Deactivated successfully.
Dec 11 09:16:50 compute-1 podman[84927]: 2025-12-11 09:16:50.020475321 +0000 UTC m=+3.265885134 container died f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a (image=quay.io/ceph/haproxy:2.3, name=lucid_murdock)
Dec 11 09:16:50 compute-1 systemd[1]: var-lib-containers-storage-overlay-6972784be9243bf9acb145a3b2fb3f1147059441ba8209c3b904a93da85df0bf-merged.mount: Deactivated successfully.
Dec 11 09:16:50 compute-1 podman[84927]: 2025-12-11 09:16:50.179955751 +0000 UTC m=+3.425365564 container remove f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a (image=quay.io/ceph/haproxy:2.3, name=lucid_murdock)
Dec 11 09:16:50 compute-1 systemd[1]: libpod-conmon-f03dc180609f6bcb2005fedc7873cc686934911f042424cbad581cabb2079a4a.scope: Deactivated successfully.
Dec 11 09:16:50 compute-1 systemd[1]: Reloading.
Dec 11 09:16:50 compute-1 systemd-rc-local-generator[85086]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:50 compute-1 systemd-sysv-generator[85095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:50 compute-1 systemd[1]: Reloading.
Dec 11 09:16:50 compute-1 systemd-sysv-generator[85134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:16:50 compute-1 systemd-rc-local-generator[85130]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:16:50 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.aifiay for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:16:50 compute-1 ceph-mon[80018]: pgmap v42: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 2.6 KiB/s wr, 155 op/s
Dec 11 09:16:51 compute-1 podman[85188]: 2025-12-11 09:16:51.062250403 +0000 UTC m=+0.028261847 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec 11 09:16:51 compute-1 podman[85188]: 2025-12-11 09:16:51.175798871 +0000 UTC m=+0.141810305 container create 84770fcf349b08bd38d1666648e2a6f98ee094683e2088d6e929ae3ede50a6ed (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay)
Dec 11 09:16:51 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/923c21376388666d9285a5f3237f4f941301bb066ed07c2178daec34b8196ac2/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec 11 09:16:51 compute-1 podman[85188]: 2025-12-11 09:16:51.261470324 +0000 UTC m=+0.227481768 container init 84770fcf349b08bd38d1666648e2a6f98ee094683e2088d6e929ae3ede50a6ed (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay)
Dec 11 09:16:51 compute-1 podman[85188]: 2025-12-11 09:16:51.266822581 +0000 UTC m=+0.232834005 container start 84770fcf349b08bd38d1666648e2a6f98ee094683e2088d6e929ae3ede50a6ed (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay)
Dec 11 09:16:51 compute-1 bash[85188]: 84770fcf349b08bd38d1666648e2a6f98ee094683e2088d6e929ae3ede50a6ed
Dec 11 09:16:51 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.aifiay for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:16:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [NOTICE] 344/091651 (2) : New worker #1 (4) forked
Dec 11 09:16:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:51 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc58c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:51 compute-1 sudo[84862]: pam_unix(sudo:session): session closed for user root
Dec 11 09:16:52 compute-1 ceph-mon[80018]: pgmap v43: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:52 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:52 compute-1 ceph-mon[80018]: Deploying daemon haproxy.nfs.cephfs.compute-0.qtoxfz on compute-0
Dec 11 09:16:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:53 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc584001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:54 compute-1 ceph-mon[80018]: pgmap v44: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:55 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:56 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:16:56 compute-1 ceph-mon[80018]: pgmap v45: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec 11 09:16:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:57 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Dec 11 09:16:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:58 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc580001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:58 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Dec 11 09:16:58 compute-1 ceph-mon[80018]: pgmap v46: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 11 09:16:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:58 compute-1 ceph-mon[80018]: osdmap e57: 3 total, 3 up, 3 in
Dec 11 09:16:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:58 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:16:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:16:59 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc584001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:16:59 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Dec 11 09:16:59 compute-1 ceph-mon[80018]: Deploying daemon haproxy.nfs.cephfs.compute-2.sgybns on compute-2
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:59 compute-1 ceph-mon[80018]: osdmap e58: 3 total, 3 up, 3 in
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:16:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:16:59 compute-1 ceph-mon[80018]: osdmap e59: 3 total, 3 up, 3 in
Dec 11 09:17:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:00 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:00 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Dec 11 09:17:00 compute-1 ceph-mon[80018]: pgmap v49: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec 11 09:17:00 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:17:00 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:17:00 compute-1 ceph-mon[80018]: osdmap e60: 3 total, 3 up, 3 in
Dec 11 09:17:01 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:01 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:01 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Dec 11 09:17:01 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 61 pg[10.0( v 56'1088 (0'0,56'1088] local-lis/les=50/51 n=178 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=61 pruub=13.277194977s) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 56'1087 mlcod 56'1087 active pruub 175.733596802s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:01 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 61 pg[10.0( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=61 pruub=13.277194977s) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 56'1087 mlcod 0'0 unknown pruub 175.733596802s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b06f28 space 0x564556abd2c0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556acf428 space 0x564556ab2760 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b24848 space 0x564556ab25c0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b06a28 space 0x564556ab2690 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556af4c08 space 0x5645569ebe20 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b06208 space 0x564556ab3390 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ade168 space 0x5645569dd120 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556af4708 space 0x5645569ebef0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b3eca8 space 0x564556ab2010 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ac5ec8 space 0x564556ab29d0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b07e28 space 0x564556ab2280 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ac5248 space 0x564556acd870 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b25248 space 0x5645569dd050 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b07d88 space 0x564556acceb0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556af5248 space 0x564556ab21b0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556aed568 space 0x564556ab2350 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ade7a8 space 0x5645569dd1f0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ad4f28 space 0x564556a2e830 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ad4708 space 0x564555d4e4f0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ad5b08 space 0x5645569dceb0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b25c48 space 0x5645569dcf80 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556923d88 space 0x564556ab20e0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b06988 space 0x564556a04830 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ae68e8 space 0x564556a045c0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556af4028 space 0x564556ab2b70 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556af4de8 space 0x5645568e2350 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b25928 space 0x564556ab24f0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556af52e8 space 0x564556ab2d10 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556b076a8 space 0x564556ab2420 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-osd[77625]: bluestore(/var/lib/ceph/osd/ceph-0).collection(10.0_head 0x564557e9a480) operator()   moving buffer(0x564556ac54c8 space 0x564556ab2aa0 0x0~1000 clean)
Dec 11 09:17:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec 11 09:17:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:02 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5800023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:02 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.11( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.7( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.12( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1f( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1e( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.10( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1b( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1d( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1c( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.19( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1a( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.18( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.6( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.5( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.3( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.b( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.8( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.4( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.9( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.d( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.a( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.e( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.c( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.f( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.2( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.14( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.15( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.13( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.16( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.17( v 56'1088 lc 0'0 (0'0,56'1088] local-lis/les=50/51 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.5( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.4( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.0( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=50/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 56'1087 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 62 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=50/50 les/c/f=51/51/0 sis=61) [0] r=0 lpr=61 pi=[50,61)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:02 compute-1 ceph-mon[80018]: pgmap v52: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Dec 11 09:17:02 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 11 09:17:02 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:02 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:02 compute-1 ceph-mon[80018]: osdmap e61: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:02 compute-1 ceph-mon[80018]: osdmap e62: 3 total, 3 up, 3 in
Dec 11 09:17:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.7 deep-scrub starts
Dec 11 09:17:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.7 deep-scrub ok
Dec 11 09:17:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:03 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc584001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:03 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.11 deep-scrub starts
Dec 11 09:17:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.11 deep-scrub ok
Dec 11 09:17:03 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Dec 11 09:17:03 compute-1 ceph-mon[80018]: 8.14 scrub starts
Dec 11 09:17:03 compute-1 ceph-mon[80018]: 8.14 scrub ok
Dec 11 09:17:03 compute-1 ceph-mon[80018]: 10.7 deep-scrub starts
Dec 11 09:17:03 compute-1 ceph-mon[80018]: 10.7 deep-scrub ok
Dec 11 09:17:03 compute-1 ceph-mon[80018]: pgmap v55: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:04 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5600016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 63 pg[12.0( empty local-lis/les=54/55 n=0 ec=54/54 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=14.794135094s) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active pruub 179.809143066s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 63 pg[12.0( empty local-lis/les=54/55 n=0 ec=54/54 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=14.794135094s) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown pruub 179.809143066s@ mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 11 09:17:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 9.16 scrub starts
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 9.16 scrub ok
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 10.11 deep-scrub starts
Dec 11 09:17:04 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 10.11 deep-scrub ok
Dec 11 09:17:04 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 11 09:17:04 compute-1 ceph-mon[80018]: osdmap e63: 3 total, 3 up, 3 in
Dec 11 09:17:04 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:04 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:04 compute-1 ceph-mon[80018]: Deploying daemon keepalived.nfs.cephfs.compute-2.qemqoo on compute-2
Dec 11 09:17:05 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.11( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.10( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.13( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.12( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.15( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.4( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.7( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.6( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.9( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.a( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.8( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.c( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.f( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.b( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.e( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.d( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.5( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.2( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.3( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1e( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1f( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1a( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1c( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1b( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.18( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.19( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.16( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.17( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.14( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1d( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1( empty local-lis/les=54/55 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.11( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.13( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.12( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.15( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.4( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.10( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.7( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.9( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.6( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.8( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.c( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.f( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.b( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.a( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.e( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.5( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.d( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.0( empty local-lis/les=63/64 n=0 ec=54/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.3( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1f( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1c( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1b( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1a( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1e( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.19( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.16( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.17( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.18( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1d( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.14( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.1( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 64 pg[12.2( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=54/54 les/c/f=55/55/0 sis=63) [0] r=0 lpr=63 pi=[54,63)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:05 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5800023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:05 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc584001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 11 09:17:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 11 09:17:05 compute-1 ceph-mon[80018]: 8.15 scrub starts
Dec 11 09:17:05 compute-1 ceph-mon[80018]: 8.15 scrub ok
Dec 11 09:17:05 compute-1 ceph-mon[80018]: 10.1e scrub starts
Dec 11 09:17:05 compute-1 ceph-mon[80018]: 10.1e scrub ok
Dec 11 09:17:05 compute-1 ceph-mon[80018]: pgmap v57: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:05 compute-1 ceph-mon[80018]: osdmap e64: 3 total, 3 up, 3 in
Dec 11 09:17:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:06 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:06 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1c deep-scrub starts
Dec 11 09:17:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1c deep-scrub ok
Dec 11 09:17:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:07 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:07 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc584001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 11 09:17:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 11 09:17:08 compute-1 ceph-mon[80018]: 9.17 scrub starts
Dec 11 09:17:08 compute-1 ceph-mon[80018]: 9.17 scrub ok
Dec 11 09:17:08 compute-1 ceph-mon[80018]: 10.1d scrub starts
Dec 11 09:17:08 compute-1 ceph-mon[80018]: 10.1d scrub ok
Dec 11 09:17:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:08 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5800023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 11 09:17:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 8.16 scrub starts
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 8.16 scrub ok
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 10.1c deep-scrub starts
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 10.1c deep-scrub ok
Dec 11 09:17:09 compute-1 ceph-mon[80018]: pgmap v59: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 9.15 scrub starts
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 9.15 scrub ok
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 10.1a scrub starts
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 10.1a scrub ok
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 9.11 deep-scrub starts
Dec 11 09:17:09 compute-1 ceph-mon[80018]: 9.11 deep-scrub ok
Dec 11 09:17:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec 11 09:17:09 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:17:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:09 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:09 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.951889038s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477691650s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.951850891s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477691650s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.11( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.312565804s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.838439941s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.10( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315604210s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841537476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.11( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.312509537s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.838439941s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.951676369s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477676392s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.951655388s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477676392s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.13( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315333366s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841461182s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.13( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315305710s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841461182s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.10( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315423012s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841537476s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.12( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315253258s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841476440s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.12( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315237999s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841476440s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.951468468s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477752686s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.951451302s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477752686s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.4( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315112114s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841522217s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.4( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315097809s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841522217s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.7( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315055847s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841583252s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.7( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.315039635s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841583252s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.950822830s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477554321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.9( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314832687s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841598511s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.6( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314949989s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841613770s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.9( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314817429s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841598511s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.950798988s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477554321s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.6( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314821243s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841613770s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.8( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314839363s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841934204s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.950531006s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477645874s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.950510025s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477645874s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.8( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314811707s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841934204s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.c( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314609528s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841949463s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.c( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314596176s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841949463s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.a( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314620018s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841995239s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.950013161s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477416992s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949993134s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477416992s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.b( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314451218s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.841979980s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.b( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314433098s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841979980s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.a( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314541817s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.841995239s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949816704s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477401733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949770927s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477401733s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949499130s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477264404s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949480057s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477264404s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.e( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314240456s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842041016s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.e( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314215660s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842041016s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949297905s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477233887s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949285507s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477233887s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.2( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314422607s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842483521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.2( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314407349s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842483521s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.5( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949069977s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=62'1089 lcod 62'1090 mlcod 62'1090 active pruub 179.477310181s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.948781967s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477050781s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.948763847s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477050781s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.3( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314069748s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842391968s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.5( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.949044228s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=62'1089 lcod 62'1090 mlcod 0'0 unknown NOTIFY pruub 179.477310181s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.3( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314023972s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842391968s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1a( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313962936s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842422485s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1e( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.314029694s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842514038s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1a( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313935280s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842422485s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1e( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313938141s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842514038s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.948340416s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.476943970s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.948327065s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.476943970s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1c( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313652039s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842468262s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.947637558s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.476440430s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.19( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313694000s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842529297s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.947615623s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.476440430s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.19( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313670158s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842529297s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.18( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313652992s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842575073s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.947469711s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.476440430s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.18( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313616753s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842575073s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.17( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313531876s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842544556s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.947446823s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.476440430s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.17( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313509941s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842544556s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1c( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313321114s) [1] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842468262s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941293716s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.470520020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.947924614s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 179.477157593s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.941270828s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.470520020s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=65 pruub=8.947910309s) [2] r=-1 lpr=65 pi=[61,65)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.477157593s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1d( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313301086s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 active pruub 181.842575073s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[12.1d( empty local-lis/les=63/64 n=0 ec=63/54 lis/c=63/63 les/c/f=64/64/0 sis=65 pruub=11.313276291s) [2] r=-1 lpr=65 pi=[63,65)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.842575073s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.14( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.17( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.14( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.10( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.11( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.12( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.1( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.10( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.8( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.f( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.a( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.4( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.5( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.7( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.4( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.1b( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.1a( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.1b( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.18( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.1c( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.19( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.1d( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[9.12( empty local-lis/les=0/0 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[11.1e( empty local-lis/les=0/0 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 65 pg[8.12( empty local-lis/les=0/0 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Dec 11 09:17:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Dec 11 09:17:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:10 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:10 compute-1 ceph-mon[80018]: 10.1f scrub starts
Dec 11 09:17:10 compute-1 ceph-mon[80018]: 10.1f scrub ok
Dec 11 09:17:10 compute-1 ceph-mon[80018]: pgmap v60: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:10 compute-1 ceph-mon[80018]: 9.10 scrub starts
Dec 11 09:17:10 compute-1 ceph-mon[80018]: 9.10 scrub ok
Dec 11 09:17:10 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 11 09:17:10 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:17:10 compute-1 ceph-mon[80018]: osdmap e65: 3 total, 3 up, 3 in
Dec 11 09:17:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 11 09:17:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 11 09:17:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:11 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:11 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:11 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680032f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:11 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.5( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=62'1089 lcod 62'1090 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.5( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=62'1089 lcod 62'1090 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.19( v 56'45 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.1a( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.12( v 49'12 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.11( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=65/66 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.1e( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.1c( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.1d( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.1b( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.18( v 56'45 lc 56'19 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.1b( v 56'45 lc 56'8 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.7( v 53'48 (0'0,53'48] local-lis/les=65/66 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.12( v 56'45 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.4( v 56'45 (0'0,56'45] local-lis/les=65/66 n=1 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.4( v 53'48 (0'0,53'48] local-lis/les=65/66 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.5( v 53'48 (0'0,53'48] local-lis/les=65/66 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.e( v 49'12 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.a( v 49'12 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.6( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=65/66 n=1 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.10( v 61'48 lc 56'14 (0'0,61'48] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=61'48 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.f( v 49'12 lc 0'0 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.8( v 56'45 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.f( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.10( v 49'12 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.12( v 53'48 (0'0,53'48] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.14( v 62'51 lc 53'45 (0'0,62'51] local-lis/les=65/66 n=0 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=62'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.17( v 56'45 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[8.14( v 56'45 (0'0,56'45] local-lis/les=65/66 n=0 ec=59/45 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=56'45 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.15( v 49'12 lc 49'8 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[9.d( v 49'12 lc 49'2 (0'0,49'12] local-lis/les=65/66 n=0 ec=59/48 lis/c=59/59 les/c/f=61/61/0 sis=65) [0] r=0 lpr=65 pi=[59,65)/1 crt=49'12 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:11 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 66 pg[11.1( v 53'48 (0'0,53'48] local-lis/les=65/66 n=1 ec=61/52 lis/c=61/61 les/c/f=62/62/0 sis=65) [0] r=0 lpr=65 pi=[61,65)/1 crt=53'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:12 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560002720 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:12 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 11 09:17:12 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 11 09:17:13 compute-1 ceph-mon[80018]: 10.16 deep-scrub starts
Dec 11 09:17:13 compute-1 ceph-mon[80018]: 10.16 deep-scrub ok
Dec 11 09:17:13 compute-1 ceph-mon[80018]: 9.14 scrub starts
Dec 11 09:17:13 compute-1 ceph-mon[80018]: 9.14 scrub ok
Dec 11 09:17:13 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 11 09:17:13 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.629706383s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477905273s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.629673004s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477905273s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.629370689s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477737427s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.629359245s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477737427s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.629061699s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477600098s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.629044533s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477600098s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.628921509s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477554321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.628898621s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477554321s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.628261566s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477294922s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.628227234s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477294922s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.627890587s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477142334s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.627531052s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.476882935s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.627509117s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.476882935s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.627866745s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477142334s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.627914429s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 187.477508545s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=67 pruub=13.627893448s) [1] r=-1 lpr=67 pi=[61,67)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.477508545s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.5( v 62'1091 (0'0,62'1091] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=62'1091 lcod 62'1090 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 67 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:13 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:13 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5800034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091713 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:17:14 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.011639595s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.858032227s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.011611938s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.858032227s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.011253357s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857955933s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.15( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.011183739s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857955933s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.011056900s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857894897s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.13( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.011019707s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857894897s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010889053s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857894897s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010816574s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857894897s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010651588s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857818604s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010612488s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857894897s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010578156s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857894897s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010581970s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857818604s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[61,66)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.009268761s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.856750488s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.5( v 67'1095 (0'0,67'1095] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.009129524s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=62'1091 lcod 67'1094 mlcod 67'1094 active pruub 189.856750488s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010201454s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857803345s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.5( v 67'1095 (0'0,67'1095] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.009091377s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=62'1091 lcod 67'1094 mlcod 0'0 unknown NOTIFY pruub 189.856750488s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.009233475s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.856750488s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.3( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.010076523s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857803345s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008828163s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.856674194s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008736610s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.856658936s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008766174s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.856674194s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008708954s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.856658936s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008332253s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.856445312s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008481979s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.856628418s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.11( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.008454323s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.856628418s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.009525299s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857833862s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.009499550s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857833862s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.007985115s) [2] async=[2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.856445312s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.007963181s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.856445312s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 68 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=5 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=68 pruub=15.007835388s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.856445312s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 10.14 scrub starts
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 10.14 scrub ok
Dec 11 09:17:14 compute-1 ceph-mon[80018]: pgmap v62: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:14 compute-1 ceph-mon[80018]: osdmap e66: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 11.15 scrub starts
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 11.15 scrub ok
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 9.2 scrub starts
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 9.2 scrub ok
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 8.18 scrub starts
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 8.18 scrub ok
Dec 11 09:17:14 compute-1 ceph-mon[80018]: pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:14 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec 11 09:17:14 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 11 09:17:14 compute-1 ceph-mon[80018]: osdmap e67: 3 total, 3 up, 3 in
Dec 11 09:17:14 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:14 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:14 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:14 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:14 compute-1 ceph-mon[80018]: Deploying daemon keepalived.nfs.cephfs.compute-0.ewssxv on compute-0
Dec 11 09:17:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:14 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5680032f0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:14 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 11 09:17:14 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 11 09:17:15 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=69 pruub=13.999596596s) [2] async=[2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 189.857986450s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=66/67 n=6 ec=61/50 lis/c=66/61 les/c/f=67/62/0 sis=69 pruub=13.999514580s) [2] r=-1 lpr=69 pi=[61,69)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.857986450s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:15 compute-1 ceph-mon[80018]: 11.0 scrub starts
Dec 11 09:17:15 compute-1 ceph-mon[80018]: 11.0 scrub ok
Dec 11 09:17:15 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 11 09:17:15 compute-1 ceph-mon[80018]: osdmap e68: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-1 ceph-mon[80018]: osdmap e69: 3 total, 3 up, 3 in
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=4 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 69 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=68) [1]/[0] async=[1] r=0 lpr=68 pi=[61,68)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:15 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:15 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:16 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.982015610s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872848511s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.981948853s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872848511s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.981859207s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872894287s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.2( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.981802940s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872894287s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.981275558s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872650146s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.981220245s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872650146s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.980815887s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872314453s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.980760574s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872314453s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.980722427s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872772217s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.980570793s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872772217s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=5 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.976696968s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.868957520s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=5 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.976659775s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.868957520s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.979845047s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872238159s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.12( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=6 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.979802132s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872238159s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.980335236s) [1] async=[1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 191.872879028s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 70 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=68/69 n=4 ec=61/50 lis/c=68/61 les/c/f=69/62/0 sis=70 pruub=14.980222702s) [1] r=-1 lpr=70 pi=[61,70)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 191.872879028s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:16 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 11 09:17:16 compute-1 ceph-mon[80018]: 11.c scrub starts
Dec 11 09:17:16 compute-1 ceph-mon[80018]: 11.c scrub ok
Dec 11 09:17:16 compute-1 ceph-mon[80018]: 10.0 scrub starts
Dec 11 09:17:16 compute-1 ceph-mon[80018]: 10.0 scrub ok
Dec 11 09:17:16 compute-1 ceph-mon[80018]: pgmap v67: 353 pgs: 16 unknown, 46 peering, 291 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 219 B/s, 1 objects/s recovering
Dec 11 09:17:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 11 09:17:16 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 11 09:17:16 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 11 09:17:17 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 10.13 scrub starts
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 10.13 scrub ok
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 11.b scrub starts
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 11.b scrub ok
Dec 11 09:17:17 compute-1 ceph-mon[80018]: osdmap e70: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 10.c scrub starts
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 10.c scrub ok
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 11.9 scrub starts
Dec 11 09:17:17 compute-1 ceph-mon[80018]: 11.9 scrub ok
Dec 11 09:17:17 compute-1 ceph-mon[80018]: osdmap e71: 3 total, 3 up, 3 in
Dec 11 09:17:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:17 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:17 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578001230 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:17 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Dec 11 09:17:17 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Dec 11 09:17:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:18 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:18 compute-1 ceph-mon[80018]: 10.1 scrub starts
Dec 11 09:17:18 compute-1 ceph-mon[80018]: 10.1 scrub ok
Dec 11 09:17:18 compute-1 ceph-mon[80018]: 10.8 scrub starts
Dec 11 09:17:18 compute-1 ceph-mon[80018]: 10.8 scrub ok
Dec 11 09:17:18 compute-1 ceph-mon[80018]: pgmap v70: 353 pgs: 16 unknown, 46 peering, 291 active+clean; 455 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 282 B/s, 2 objects/s recovering
Dec 11 09:17:18 compute-1 ceph-mon[80018]: 11.d scrub starts
Dec 11 09:17:18 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:18 compute-1 ceph-mon[80018]: 11.d scrub ok
Dec 11 09:17:18 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.f scrub starts
Dec 11 09:17:18 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.f scrub ok
Dec 11 09:17:19 compute-1 sudo[85220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:17:19 compute-1 sudo[85220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:19 compute-1 sudo[85220]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:19 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Dec 11 09:17:19 compute-1 sudo[85245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:17:19 compute-1 sudo[85245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:17:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:19 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:19 compute-1 ceph-mon[80018]: 10.d scrub starts
Dec 11 09:17:19 compute-1 ceph-mon[80018]: 10.d scrub ok
Dec 11 09:17:19 compute-1 ceph-mon[80018]: 12.15 scrub starts
Dec 11 09:17:19 compute-1 ceph-mon[80018]: 12.15 scrub ok
Dec 11 09:17:19 compute-1 ceph-mon[80018]: 8.e scrub starts
Dec 11 09:17:19 compute-1 ceph-mon[80018]: 8.e scrub ok
Dec 11 09:17:19 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec 11 09:17:19 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:19 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:19 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.d scrub starts
Dec 11 09:17:19 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.d scrub ok
Dec 11 09:17:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:20 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578001d50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 12.3 scrub starts
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 12.3 scrub ok
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 12.f scrub starts
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 12.f scrub ok
Dec 11 09:17:20 compute-1 ceph-mon[80018]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 1.2 KiB/s wr, 105 op/s; 622 B/s, 23 objects/s recovering
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:20 compute-1 ceph-mon[80018]: Deploying daemon keepalived.nfs.cephfs.compute-1.aigyat on compute-1
Dec 11 09:17:20 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 11 09:17:20 compute-1 ceph-mon[80018]: osdmap e72: 3 total, 3 up, 3 in
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 9.c scrub starts
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 12.d scrub starts
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 9.c scrub ok
Dec 11 09:17:20 compute-1 ceph-mon[80018]: 12.d scrub ok
Dec 11 09:17:20 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Dec 11 09:17:20 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Dec 11 09:17:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:21 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:21 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 12.2 scrub starts
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 12.2 scrub ok
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 8.6 scrub starts
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 8.6 scrub ok
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 11.2 scrub starts
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 11.2 scrub ok
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 12.5 scrub starts
Dec 11 09:17:21 compute-1 ceph-mon[80018]: 12.5 scrub ok
Dec 11 09:17:21 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec 11 09:17:21 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:21 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:21 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Dec 11 09:17:21 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Dec 11 09:17:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:22 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.429674149s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 195.477966309s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.429369926s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 195.477981567s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.429338455s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.477981567s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.429120064s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.477966309s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.4( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.428596497s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=62'1089 lcod 62'1090 mlcod 62'1090 active pruub 195.477706909s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.4( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.428563118s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=62'1089 lcod 62'1090 mlcod 0'0 unknown NOTIFY pruub 195.477706909s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.427834511s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 195.477401733s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 73 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=73 pruub=12.427817345s) [2] r=-1 lpr=73 pi=[61,73)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.477401733s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:22 compute-1 ceph-mon[80018]: pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.0 KiB/s wr, 87 op/s; 516 B/s, 19 objects/s recovering
Dec 11 09:17:22 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 11 09:17:22 compute-1 ceph-mon[80018]: osdmap e73: 3 total, 3 up, 3 in
Dec 11 09:17:22 compute-1 ceph-mon[80018]: 8.1 scrub starts
Dec 11 09:17:22 compute-1 ceph-mon[80018]: 8.1 scrub ok
Dec 11 09:17:22 compute-1 ceph-mon[80018]: 8.1f scrub starts
Dec 11 09:17:22 compute-1 ceph-mon[80018]: 8.1f scrub ok
Dec 11 09:17:22 compute-1 ceph-mon[80018]: 12.0 scrub starts
Dec 11 09:17:22 compute-1 ceph-mon[80018]: 12.0 scrub ok
Dec 11 09:17:22 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.4( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=62'1089 lcod 62'1090 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.4( v 62'1091 (0'0,62'1091] local-lis/les=61/62 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=62'1089 lcod 62'1090 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:22 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 74 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:22 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Dec 11 09:17:22 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Dec 11 09:17:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:23 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578001d50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:23 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Dec 11 09:17:23 compute-1 ceph-mon[80018]: osdmap e74: 3 total, 3 up, 3 in
Dec 11 09:17:23 compute-1 ceph-mon[80018]: 9.0 scrub starts
Dec 11 09:17:23 compute-1 ceph-mon[80018]: 9.0 scrub ok
Dec 11 09:17:23 compute-1 ceph-mon[80018]: 11.8 scrub starts
Dec 11 09:17:23 compute-1 ceph-mon[80018]: 11.8 scrub ok
Dec 11 09:17:23 compute-1 ceph-mon[80018]: 12.1f scrub starts
Dec 11 09:17:23 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec 11 09:17:23 compute-1 ceph-mon[80018]: 12.1f scrub ok
Dec 11 09:17:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:23 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:23 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 75 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:23 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 75 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:23 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 75 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[61,74)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:23 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 75 pg[10.4( v 62'1091 (0'0,62'1091] local-lis/les=74/75 n=6 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=74) [2]/[0] async=[2] r=0 lpr=74 pi=[61,74)/1 crt=62'1091 lcod 62'1090 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:24 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=5 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.549734116s) [2] async=[2] r=-1 lpr=76 pi=[61,76)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 200.449234009s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=5 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.549966812s) [2] async=[2] r=-1 lpr=76 pi=[61,76)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 200.449645996s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=5 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.549880028s) [2] r=-1 lpr=76 pi=[61,76)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.449645996s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.4( v 75'1092 (0'0,75'1092] local-lis/les=74/75 n=6 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.552930832s) [2] async=[2] r=-1 lpr=76 pi=[61,76)/1 crt=62'1091 lcod 62'1091 mlcod 62'1091 active pruub 200.452911377s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.4( v 75'1092 (0'0,75'1092] local-lis/les=74/75 n=6 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.552840233s) [2] r=-1 lpr=76 pi=[61,76)/1 crt=62'1091 lcod 62'1091 mlcod 0'0 unknown NOTIFY pruub 200.452911377s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.14( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=5 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.549248695s) [2] r=-1 lpr=76 pi=[61,76)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.449234009s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=7 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.548988342s) [2] async=[2] r=-1 lpr=76 pi=[61,76)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 200.449218750s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:24 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 76 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=74/75 n=7 ec=61/50 lis/c=74/61 les/c/f=75/62/0 sis=76 pruub=15.548659325s) [2] r=-1 lpr=76 pi=[61,76)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.449218750s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:24 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:24 compute-1 ceph-mon[80018]: pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 1.1 KiB/s wr, 89 op/s; 527 B/s, 20 objects/s recovering
Dec 11 09:17:24 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 11 09:17:24 compute-1 ceph-mon[80018]: osdmap e75: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-1 ceph-mon[80018]: 9.1 scrub starts
Dec 11 09:17:24 compute-1 ceph-mon[80018]: 9.1 scrub ok
Dec 11 09:17:24 compute-1 ceph-mon[80018]: osdmap e76: 3 total, 3 up, 3 in
Dec 11 09:17:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:24 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.084993359 +0000 UTC m=+5.425514175 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.124723735 +0000 UTC m=+5.465244541 container create 7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c (image=quay.io/ceph/keepalived:2.2.4, name=goofy_hugle, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, version=2.2.4, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Dec 11 09:17:25 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Dec 11 09:17:25 compute-1 systemd[1]: Started libpod-conmon-7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c.scope.
Dec 11 09:17:25 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.198909979 +0000 UTC m=+5.539430765 container init 7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c (image=quay.io/ceph/keepalived:2.2.4, name=goofy_hugle, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, description=keepalived for Ceph, architecture=x86_64, name=keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.206162314 +0000 UTC m=+5.546683100 container start 7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c (image=quay.io/ceph/keepalived:2.2.4, name=goofy_hugle, version=2.2.4, name=keepalived, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.210012413 +0000 UTC m=+5.550533199 container attach 7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c (image=quay.io/ceph/keepalived:2.2.4, name=goofy_hugle, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Dec 11 09:17:25 compute-1 systemd[1]: libpod-7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c.scope: Deactivated successfully.
Dec 11 09:17:25 compute-1 goofy_hugle[85410]: 0 0
Dec 11 09:17:25 compute-1 conmon[85410]: conmon 7bc1ba7fd3c21abc1dc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c.scope/container/memory.events
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.213514833 +0000 UTC m=+5.554035619 container died 7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c (image=quay.io/ceph/keepalived:2.2.4, name=goofy_hugle, vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, version=2.2.4, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=)
Dec 11 09:17:25 compute-1 systemd[1]: var-lib-containers-storage-overlay-b9b15bc2fb86524dea688540c2fde88b12bc3a6ba2ae13ddd114daa22723b4a7-merged.mount: Deactivated successfully.
Dec 11 09:17:25 compute-1 podman[85312]: 2025-12-11 09:17:25.280585034 +0000 UTC m=+5.621105820 container remove 7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c (image=quay.io/ceph/keepalived:2.2.4, name=goofy_hugle, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, version=2.2.4)
Dec 11 09:17:25 compute-1 systemd[1]: libpod-conmon-7bc1ba7fd3c21abc1dc86e601a10109714c4094e6dcce8550a6be34be9c4e13c.scope: Deactivated successfully.
Dec 11 09:17:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:25 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:25 compute-1 systemd[1]: Reloading.
Dec 11 09:17:25 compute-1 systemd-rc-local-generator[85457]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:25 compute-1 systemd-sysv-generator[85462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:25 compute-1 systemd[1]: Reloading.
Dec 11 09:17:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:25 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578001d50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:25 compute-1 systemd-rc-local-generator[85499]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 11 09:17:25 compute-1 systemd-sysv-generator[85503]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 11 09:17:25 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.aigyat for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:26 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:26 compute-1 podman[85555]: 2025-12-11 09:17:26.200970905 +0000 UTC m=+0.092675048 container create 41c18657fbf15d474ce33c6dd515a50eabf8b7f96b8ef7241f573dfb1b064354 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, release=1793, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Dec 11 09:17:26 compute-1 ceph-mon[80018]: 8.0 scrub starts
Dec 11 09:17:26 compute-1 ceph-mon[80018]: 8.0 scrub ok
Dec 11 09:17:26 compute-1 ceph-mon[80018]: pgmap v80: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:26 compute-1 ceph-mon[80018]: osdmap e77: 3 total, 3 up, 3 in
Dec 11 09:17:26 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Dec 11 09:17:26 compute-1 podman[85555]: 2025-12-11 09:17:26.128913273 +0000 UTC m=+0.020617446 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec 11 09:17:26 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40d64fea57119f64ca4127a04120a7f5f96f406a8090cab20cf98d3957da3102/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:17:26 compute-1 podman[85555]: 2025-12-11 09:17:26.281884119 +0000 UTC m=+0.173588292 container init 41c18657fbf15d474ce33c6dd515a50eabf8b7f96b8ef7241f573dfb1b064354 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1793, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64)
Dec 11 09:17:26 compute-1 podman[85555]: 2025-12-11 09:17:26.286335925 +0000 UTC m=+0.178040078 container start 41c18657fbf15d474ce33c6dd515a50eabf8b7f96b8ef7241f573dfb1b064354 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, distribution-scope=public, architecture=x86_64)
Dec 11 09:17:26 compute-1 bash[85555]: 41c18657fbf15d474ce33c6dd515a50eabf8b7f96b8ef7241f573dfb1b064354
Dec 11 09:17:26 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.aigyat for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: Running on Linux 5.14.0-648.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025 (built for Linux 5.14.0)
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: Configuration file /etc/keepalived/keepalived.conf
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: Starting VRRP child process, pid=4
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: Startup complete
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: (VI_0) Entering BACKUP STATE (init)
Dec 11 09:17:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:26 2025: VRRP_Script(check_backend) succeeded
Dec 11 09:17:26 compute-1 sudo[85245]: pam_unix(sudo:session): session closed for user root
Dec 11 09:17:26 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:27 compute-1 ceph-mon[80018]: 8.7 scrub starts
Dec 11 09:17:27 compute-1 ceph-mon[80018]: 8.7 scrub ok
Dec 11 09:17:27 compute-1 ceph-mon[80018]: 10.4 deep-scrub starts
Dec 11 09:17:27 compute-1 ceph-mon[80018]: 10.4 deep-scrub ok
Dec 11 09:17:27 compute-1 ceph-mon[80018]: osdmap e78: 3 total, 3 up, 3 in
Dec 11 09:17:27 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:27 compute-1 ceph-mon[80018]: Deploying daemon alertmanager.compute-0 on compute-0
Dec 11 09:17:27 compute-1 ceph-mon[80018]: 11.6 scrub starts
Dec 11 09:17:27 compute-1 ceph-mon[80018]: 11.6 scrub ok
Dec 11 09:17:27 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Dec 11 09:17:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:27 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:27 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:27 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:17:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:27 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:17:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:28 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:28 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:17:28 compute-1 ceph-mon[80018]: pgmap v83: 353 pgs: 4 remapped+peering, 349 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:28 compute-1 ceph-mon[80018]: osdmap e79: 3 total, 3 up, 3 in
Dec 11 09:17:28 compute-1 ceph-mon[80018]: 9.4 scrub starts
Dec 11 09:17:28 compute-1 ceph-mon[80018]: 9.4 scrub ok
Dec 11 09:17:28 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Dec 11 09:17:28 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Dec 11 09:17:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:29 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:29 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:29 compute-1 ceph-mon[80018]: 11.18 deep-scrub starts
Dec 11 09:17:29 compute-1 ceph-mon[80018]: 11.18 deep-scrub ok
Dec 11 09:17:29 compute-1 ceph-mon[80018]: 8.b deep-scrub starts
Dec 11 09:17:29 compute-1 ceph-mon[80018]: 8.b deep-scrub ok
Dec 11 09:17:29 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec 11 09:17:29 compute-1 ceph-mon[80018]: 12.1b scrub starts
Dec 11 09:17:29 compute-1 ceph-mon[80018]: 12.1b scrub ok
Dec 11 09:17:29 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Dec 11 09:17:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:29 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:29 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Dec 11 09:17:29 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Dec 11 09:17:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:29 2025: (VI_0) Entering MASTER STATE
Dec 11 09:17:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:29 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Dec 11 09:17:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat[85570]: Thu Dec 11 09:17:29 2025: (VI_0) Entering BACKUP STATE
Dec 11 09:17:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:30 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5780031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 80 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [0] r=0 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 80 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [0] r=0 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 80 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [0] r=0 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 80 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=80) [0] r=0 lpr=80 pi=[70,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:30 compute-1 ceph-mon[80018]: pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 8 objects/s recovering
Dec 11 09:17:30 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 11 09:17:30 compute-1 ceph-mon[80018]: osdmap e80: 3 total, 3 up, 3 in
Dec 11 09:17:30 compute-1 ceph-mon[80018]: 9.1a scrub starts
Dec 11 09:17:30 compute-1 ceph-mon[80018]: 9.1a scrub ok
Dec 11 09:17:30 compute-1 ceph-mon[80018]: 12.11 deep-scrub starts
Dec 11 09:17:30 compute-1 ceph-mon[80018]: 12.11 deep-scrub ok
Dec 11 09:17:30 compute-1 ceph-mon[80018]: 12.16 scrub starts
Dec 11 09:17:30 compute-1 ceph-mon[80018]: 12.16 scrub ok
Dec 11 09:17:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.16( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.6( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:30 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 81 pg[10.1e( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=81) [0]/[1] r=-1 lpr=81 pi=[70,81)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:30 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Dec 11 09:17:30 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Dec 11 09:17:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:31 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:31 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:17:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:31 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:31 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Dec 11 09:17:31 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Dec 11 09:17:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:32 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Dec 11 09:17:32 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 82 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=82) [0] r=0 lpr=82 pi=[68,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:32 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 82 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=82) [0] r=0 lpr=82 pi=[68,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:32 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 82 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=82) [0] r=0 lpr=82 pi=[68,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:32 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 82 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=82) [0] r=0 lpr=82 pi=[68,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:32 compute-1 ceph-mon[80018]: 9.1b scrub starts
Dec 11 09:17:32 compute-1 ceph-mon[80018]: 9.1b scrub ok
Dec 11 09:17:32 compute-1 ceph-mon[80018]: 12.1e scrub starts
Dec 11 09:17:32 compute-1 ceph-mon[80018]: 12.1e scrub ok
Dec 11 09:17:32 compute-1 ceph-mon[80018]: pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 150 B/s, 8 objects/s recovering
Dec 11 09:17:32 compute-1 ceph-mon[80018]: osdmap e81: 3 total, 3 up, 3 in
Dec 11 09:17:32 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec 11 09:17:32 compute-1 ceph-mon[80018]: 12.14 scrub starts
Dec 11 09:17:32 compute-1 ceph-mon[80018]: 12.14 scrub ok
Dec 11 09:17:32 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 11 09:17:32 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 11 09:17:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:33 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5780031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:33 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:33 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.7( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.17( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=83 pruub=8.928080559s) [1] r=-1 lpr=83 pi=[61,83)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 203.477615356s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=83 pruub=8.928037643s) [1] r=-1 lpr=83 pi=[61,83)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.477615356s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=83 pruub=8.927627563s) [1] r=-1 lpr=83 pi=[61,83)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 203.477600098s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=83 pruub=8.927607536s) [1] r=-1 lpr=83 pi=[61,83)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.477600098s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:33 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 83 pg[10.f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=83) [0]/[2] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:33 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 11 09:17:33 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 11 09:17:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:34 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:34 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 11 09:17:34 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 11 09:17:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:35 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:35 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1e deep-scrub starts
Dec 11 09:17:35 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1e deep-scrub ok
Dec 11 09:17:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:36 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 8.1a scrub starts
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 11.19 deep-scrub starts
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 11.19 deep-scrub ok
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 8.1a scrub ok
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 12.1 scrub starts
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 12.1 scrub ok
Dec 11 09:17:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 11 09:17:36 compute-1 ceph-mon[80018]: osdmap e82: 3 total, 3 up, 3 in
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 11.16 scrub starts
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 11.16 scrub ok
Dec 11 09:17:36 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 9.12 scrub starts
Dec 11 09:17:36 compute-1 ceph-mon[80018]: 9.12 scrub ok
Dec 11 09:17:36 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=0 lpr=84 pi=[61,84)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=0 lpr=84 pi=[61,84)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=0 lpr=84 pi=[61,84)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] r=0 lpr=84 pi=[61,84)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.e( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=5 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.6( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=6 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:36 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 84 pg[10.16( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=4 ec=61/50 lis/c=81/70 les/c/f=82/71/0 sis=83) [0] r=0 lpr=83 pi=[70,83)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:36 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:36 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 11 09:17:36 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 11 09:17:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:37 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:37 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:37 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 11 09:17:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091737 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:17:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:38 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:38 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 7 objects/s recovering
Dec 11 09:17:38 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 12.1d scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 12.1d scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 8.19 scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 8.19 scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: osdmap e83: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 9.18 scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 9.18 scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: pgmap v92: 353 pgs: 4 unknown, 4 active+remapped, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 11.1a scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 11.1a scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 12.18 scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 12.18 scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 11.1e deep-scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 11.1e deep-scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: osdmap e84: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 10.15 scrub starts
Dec 11 09:17:38 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:38 compute-1 ceph-mon[80018]: 10.15 scrub ok
Dec 11 09:17:38 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=84/85 n=5 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[61,84)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:38 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 85 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=84/85 n=7 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[61,84)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:39 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 8.1c scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 8.1c scrub ok
Dec 11 09:17:39 compute-1 ceph-mon[80018]: pgmap v94: 353 pgs: 4 unknown, 4 active+remapped, 345 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 11.1c scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 11.1c scrub ok
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.19 scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.19 scrub ok
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.13 scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.13 scrub ok
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 11.1d scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 11.1d scrub ok
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: osdmap e85: 3 total, 3 up, 3 in
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: Regenerating cephadm self-signed grafana TLS certificates
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec 11 09:17:39 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:39 compute-1 ceph-mon[80018]: Deploying daemon grafana.compute-0 on compute-0
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.1e scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.1e scrub ok
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.1d scrub starts
Dec 11 09:17:39 compute-1 ceph-mon[80018]: 9.1d scrub ok
Dec 11 09:17:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:39 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:40 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=84/85 n=5 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86 pruub=14.796906471s) [1] async=[1] r=-1 lpr=86 pi=[61,86)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 215.649200439s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.18( v 56'1088 (0'0,56'1088] local-lis/les=84/85 n=5 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86 pruub=14.796839714s) [1] r=-1 lpr=86 pi=[61,86)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.649200439s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=84/85 n=7 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86 pruub=14.796854973s) [1] async=[1] r=-1 lpr=86 pi=[61,86)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 215.649246216s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.8( v 56'1088 (0'0,56'1088] local-lis/les=84/85 n=7 ec=61/50 lis/c=84/61 les/c/f=85/62/0 sis=86 pruub=14.796813965s) [1] r=-1 lpr=86 pi=[61,86)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 215.649246216s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.17( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=6 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:40 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 86 pg[10.7( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=83/68 les/c/f=84/69/0 sis=85) [0] r=0 lpr=85 pi=[68,85)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:40 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:40 compute-1 ceph-mon[80018]: pgmap v96: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 91 B/s, 5 objects/s recovering
Dec 11 09:17:40 compute-1 ceph-mon[80018]: 12.1a scrub starts
Dec 11 09:17:40 compute-1 ceph-mon[80018]: 12.1a scrub ok
Dec 11 09:17:40 compute-1 ceph-mon[80018]: 9.1f scrub starts
Dec 11 09:17:40 compute-1 ceph-mon[80018]: 9.1f scrub ok
Dec 11 09:17:40 compute-1 ceph-mon[80018]: osdmap e86: 3 total, 3 up, 3 in
Dec 11 09:17:40 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 11 09:17:40 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 11 09:17:41 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Dec 11 09:17:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:41 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:41 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.669502) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661669752, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7238, "num_deletes": 255, "total_data_size": 20420657, "memory_usage": 21296272, "flush_reason": "Manual Compaction"}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 11 09:17:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:41 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661749595, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12615128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 244, "largest_seqno": 7243, "table_properties": {"data_size": 12586925, "index_size": 17912, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 89824, "raw_average_key_size": 24, "raw_value_size": 12516212, "raw_average_value_size": 3397, "num_data_blocks": 794, "num_entries": 3684, "num_filter_entries": 3684, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444500, "oldest_key_time": 1765444500, "file_creation_time": 1765444661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 80280 microseconds, and 29077 cpu microseconds.
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.749791) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12615128 bytes OK
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.749868) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.751457) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.751472) EVENT_LOG_v1 {"time_micros": 1765444661751468, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.751487) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20381076, prev total WAL file size 20381076, number of live WAL files 2.
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.755323) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(12MB) 8(1648B)]
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661755698, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12616776, "oldest_snapshot_seqno": -1}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3433 keys, 12611780 bytes, temperature: kUnknown
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661860513, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12611780, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12584167, "index_size": 17903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 85646, "raw_average_key_size": 24, "raw_value_size": 12516492, "raw_average_value_size": 3645, "num_data_blocks": 794, "num_entries": 3433, "num_filter_entries": 3433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444500, "oldest_key_time": 0, "file_creation_time": 1765444661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.860759) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12611780 bytes
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.862832) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.3 rd, 120.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(12.0, 0.0 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3689, records dropped: 256 output_compression: NoCompression
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.862885) EVENT_LOG_v1 {"time_micros": 1765444661862862, "job": 4, "event": "compaction_finished", "compaction_time_micros": 104882, "compaction_time_cpu_micros": 55999, "output_level": 6, "num_output_files": 1, "total_output_size": 12611780, "num_input_records": 3689, "num_output_records": 3433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661867809, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444661867956, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 11 09:17:41 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:17:41.755071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:17:41 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 11 09:17:42 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 11 09:17:42 compute-1 ceph-mon[80018]: 11.13 scrub starts
Dec 11 09:17:42 compute-1 ceph-mon[80018]: 11.13 scrub ok
Dec 11 09:17:42 compute-1 ceph-mon[80018]: pgmap v98: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 12 op/s; 146 B/s, 5 objects/s recovering
Dec 11 09:17:42 compute-1 ceph-mon[80018]: 8.1e scrub starts
Dec 11 09:17:42 compute-1 ceph-mon[80018]: 8.1e scrub ok
Dec 11 09:17:42 compute-1 ceph-mon[80018]: 10.17 scrub starts
Dec 11 09:17:42 compute-1 ceph-mon[80018]: 10.17 scrub ok
Dec 11 09:17:42 compute-1 ceph-mon[80018]: osdmap e87: 3 total, 3 up, 3 in
Dec 11 09:17:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:42 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:43 compute-1 ceph-mon[80018]: 9.5 scrub starts
Dec 11 09:17:43 compute-1 ceph-mon[80018]: 9.5 scrub ok
Dec 11 09:17:43 compute-1 ceph-mon[80018]: 9.1c scrub starts
Dec 11 09:17:43 compute-1 ceph-mon[80018]: 9.1c scrub ok
Dec 11 09:17:43 compute-1 ceph-mon[80018]: 10.f scrub starts
Dec 11 09:17:43 compute-1 ceph-mon[80018]: 10.f scrub ok
Dec 11 09:17:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:43 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:43 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:44 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 11 09:17:44 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 11 09:17:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:44 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:44 compute-1 ceph-mon[80018]: 11.a scrub starts
Dec 11 09:17:44 compute-1 ceph-mon[80018]: 11.a scrub ok
Dec 11 09:17:44 compute-1 ceph-mon[80018]: pgmap v100: 353 pgs: 2 remapped+peering, 4 active+remapped, 347 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 0 B/s wr, 12 op/s; 146 B/s, 5 objects/s recovering
Dec 11 09:17:44 compute-1 ceph-mon[80018]: 8.1d scrub starts
Dec 11 09:17:44 compute-1 ceph-mon[80018]: 8.1d scrub ok
Dec 11 09:17:44 compute-1 ceph-mon[80018]: 11.1f scrub starts
Dec 11 09:17:44 compute-1 ceph-mon[80018]: 11.1f scrub ok
Dec 11 09:17:45 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 11 09:17:45 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 11 09:17:45 compute-1 ceph-mon[80018]: 8.5 scrub starts
Dec 11 09:17:45 compute-1 ceph-mon[80018]: 8.5 scrub ok
Dec 11 09:17:45 compute-1 ceph-mon[80018]: 11.1b scrub starts
Dec 11 09:17:45 compute-1 ceph-mon[80018]: 11.1b scrub ok
Dec 11 09:17:45 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec 11 09:17:45 compute-1 ceph-mon[80018]: 11.10 scrub starts
Dec 11 09:17:45 compute-1 ceph-mon[80018]: 11.10 scrub ok
Dec 11 09:17:45 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Dec 11 09:17:45 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=88) [0] r=0 lpr=88 pi=[68,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:45 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=88) [0] r=0 lpr=88 pi=[69,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:45 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:45 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578003ef0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:46 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 11 09:17:46 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 11 09:17:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:46 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:46 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 89 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=89) [0]/[2] r=-1 lpr=89 pi=[69,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:46 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 89 pg[10.9( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=69/69 les/c/f=70/70/0 sis=89) [0]/[2] r=-1 lpr=89 pi=[69,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:46 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 89 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=89) [0]/[2] r=-1 lpr=89 pi=[68,89)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:46 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 89 pg[10.19( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=89) [0]/[2] r=-1 lpr=89 pi=[68,89)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:46 compute-1 ceph-mon[80018]: pgmap v101: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:46 compute-1 ceph-mon[80018]: 8.9 scrub starts
Dec 11 09:17:46 compute-1 ceph-mon[80018]: 8.9 scrub ok
Dec 11 09:17:46 compute-1 ceph-mon[80018]: 8.12 scrub starts
Dec 11 09:17:46 compute-1 ceph-mon[80018]: 8.12 scrub ok
Dec 11 09:17:46 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 11 09:17:46 compute-1 ceph-mon[80018]: osdmap e88: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-1 ceph-mon[80018]: 8.13 scrub starts
Dec 11 09:17:46 compute-1 ceph-mon[80018]: 8.13 scrub ok
Dec 11 09:17:46 compute-1 ceph-mon[80018]: osdmap e89: 3 total, 3 up, 3 in
Dec 11 09:17:46 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:47 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 11 09:17:47 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 11 09:17:47 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Dec 11 09:17:47 compute-1 ceph-mon[80018]: 9.b scrub starts
Dec 11 09:17:47 compute-1 ceph-mon[80018]: 9.b scrub ok
Dec 11 09:17:47 compute-1 ceph-mon[80018]: 8.4 scrub starts
Dec 11 09:17:47 compute-1 ceph-mon[80018]: 8.4 scrub ok
Dec 11 09:17:47 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec 11 09:17:47 compute-1 ceph-mon[80018]: 11.11 scrub starts
Dec 11 09:17:47 compute-1 ceph-mon[80018]: 11.11 scrub ok
Dec 11 09:17:47 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 11 09:17:47 compute-1 ceph-mon[80018]: osdmap e90: 3 total, 3 up, 3 in
Dec 11 09:17:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:47 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=90) [0] r=0 lpr=90 pi=[70,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:48 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 11 09:17:48 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 11 09:17:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:48 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc580001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:48 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=89/69 les/c/f=90/70/0 sis=91) [0] r=0 lpr=91 pi=[69,91)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=89/69 les/c/f=90/70/0 sis=91) [0] r=0 lpr=91 pi=[69,91)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[70,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[70,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[70,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:48 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 91 pg[10.1a( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=70/70 les/c/f=71/71/0 sis=91) [0]/[1] r=-1 lpr=91 pi=[70,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:48 compute-1 ceph-mon[80018]: pgmap v104: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:48 compute-1 ceph-mon[80018]: 8.a scrub starts
Dec 11 09:17:48 compute-1 ceph-mon[80018]: 8.a scrub ok
Dec 11 09:17:48 compute-1 ceph-mon[80018]: 11.5 scrub starts
Dec 11 09:17:48 compute-1 ceph-mon[80018]: 11.5 scrub ok
Dec 11 09:17:48 compute-1 ceph-mon[80018]: 12.12 scrub starts
Dec 11 09:17:48 compute-1 ceph-mon[80018]: 12.12 scrub ok
Dec 11 09:17:49 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 11 09:17:49 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 11 09:17:49 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 92 pg[10.9( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=6 ec=61/50 lis/c=89/69 les/c/f=90/70/0 sis=91) [0] r=0 lpr=91 pi=[69,91)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:49 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 92 pg[10.b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=92) [0] r=0 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:49 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 92 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=92) [0] r=0 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:49 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 92 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=5 ec=61/50 lis/c=89/68 les/c/f=90/69/0 sis=91) [0] r=0 lpr=91 pi=[68,91)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 12.13 scrub starts
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 12.13 scrub ok
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 11.4 scrub starts
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 11.4 scrub ok
Dec 11 09:17:49 compute-1 ceph-mon[80018]: osdmap e91: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 12.6 scrub starts
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 12.6 scrub ok
Dec 11 09:17:49 compute-1 ceph-mon[80018]: pgmap v107: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 11 09:17:49 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 9.e scrub starts
Dec 11 09:17:49 compute-1 ceph-mon[80018]: 9.e scrub ok
Dec 11 09:17:49 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 11 09:17:49 compute-1 ceph-mon[80018]: osdmap e92: 3 total, 3 up, 3 in
Dec 11 09:17:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:49 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:49 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:50 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.a deep-scrub starts
Dec 11 09:17:50 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.a deep-scrub ok
Dec 11 09:17:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:50 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:50 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93) [0] r=0 lpr=93 pi=[70,93)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93) [0] r=0 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.1b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93) [0] r=0 lpr=93 pi=[70,93)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.b( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=68/68 les/c/f=69/69/0 sis=93) [0]/[2] r=-1 lpr=93 pi=[68,93)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 93 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=4 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93) [0] r=0 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:50 compute-1 ceph-mon[80018]: 12.9 scrub starts
Dec 11 09:17:50 compute-1 ceph-mon[80018]: 12.9 scrub ok
Dec 11 09:17:50 compute-1 ceph-mon[80018]: 12.b deep-scrub starts
Dec 11 09:17:50 compute-1 ceph-mon[80018]: 12.b deep-scrub ok
Dec 11 09:17:50 compute-1 ceph-mon[80018]: 9.a deep-scrub starts
Dec 11 09:17:50 compute-1 ceph-mon[80018]: 9.a deep-scrub ok
Dec 11 09:17:51 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 11 09:17:51 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 11 09:17:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:51 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc580001ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:51 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:51 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc55c003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:51 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 12.4 scrub starts
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 12.4 scrub ok
Dec 11 09:17:51 compute-1 ceph-mon[80018]: osdmap e93: 3 total, 3 up, 3 in
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 12.10 deep-scrub starts
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 12.10 deep-scrub ok
Dec 11 09:17:51 compute-1 ceph-mon[80018]: pgmap v110: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 2 objects/s recovering
Dec 11 09:17:51 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 8.f scrub starts
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 8.f scrub ok
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 8.8 scrub starts
Dec 11 09:17:51 compute-1 ceph-mon[80018]: 8.8 scrub ok
Dec 11 09:17:51 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 94 pg[10.1c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=94) [0] r=0 lpr=94 pi=[76,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:51 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 94 pg[10.c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=94) [0] r=0 lpr=94 pi=[76,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:51 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 94 pg[10.a( v 56'1088 (0'0,56'1088] local-lis/les=93/94 n=6 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93) [0] r=0 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:51 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 94 pg[10.1a( v 56'1088 (0'0,56'1088] local-lis/les=93/94 n=4 ec=61/50 lis/c=91/70 les/c/f=92/71/0 sis=93) [0] r=0 lpr=93 pi=[70,93)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:52 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 11 09:17:52 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 11 09:17:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:52 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:53 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.12 deep-scrub starts
Dec 11 09:17:53 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.12 deep-scrub ok
Dec 11 09:17:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:53 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:54 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 11 09:17:54 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 11 09:17:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:54 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:54 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=6 ec=61/50 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.1c( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=76/76 les/c/f=77/77/0 sis=95) [0]/[2] r=-1 lpr=95 pi=[76,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 95 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:54 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 11 09:17:54 compute-1 ceph-mon[80018]: osdmap e94: 3 total, 3 up, 3 in
Dec 11 09:17:54 compute-1 ceph-mon[80018]: 12.c scrub starts
Dec 11 09:17:54 compute-1 ceph-mon[80018]: 12.c scrub ok
Dec 11 09:17:54 compute-1 ceph-mon[80018]: 8.3 scrub starts
Dec 11 09:17:54 compute-1 ceph-mon[80018]: 8.3 scrub ok
Dec 11 09:17:54 compute-1 ceph-mon[80018]: 11.f scrub starts
Dec 11 09:17:54 compute-1 ceph-mon[80018]: 11.f scrub ok
Dec 11 09:17:55 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Dec 11 09:17:55 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Dec 11 09:17:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:55 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:55 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 96 pg[10.b( v 56'1088 (0'0,56'1088] local-lis/les=95/96 n=6 ec=61/50 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:55 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 96 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=95/96 n=5 ec=61/50 lis/c=93/68 les/c/f=94/69/0 sis=95) [0] r=0 lpr=95 pi=[68,95)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.a scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.a scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: pgmap v112: 353 pgs: 2 active+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.7 scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.7 scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 11.12 deep-scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 11.12 deep-scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.e scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.17 scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.17 scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 8.17 scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 8.17 scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.e scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: osdmap e95: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.8 scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 12.8 scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-1 ceph-mon[80018]: pgmap v114: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 58 B/s, 1 objects/s recovering
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:55 compute-1 ceph-mon[80018]: Deploying daemon haproxy.rgw.default.compute-0.paephv on compute-0
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 8.11 scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 8.11 scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 11.1 deep-scrub starts
Dec 11 09:17:55 compute-1 ceph-mon[80018]: 11.1 deep-scrub ok
Dec 11 09:17:55 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 11 09:17:55 compute-1 ceph-mon[80018]: osdmap e96: 3 total, 3 up, 3 in
Dec 11 09:17:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:55 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:56 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 11 09:17:56 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 11 09:17:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:56 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:56 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Dec 11 09:17:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 97 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=96) [0] r=0 lpr=97 pi=[78,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 97 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=7 ec=61/50 lis/c=95/76 les/c/f=96/77/0 sis=97) [0] r=0 lpr=97 pi=[76,97)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 97 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=96) [0] r=0 lpr=97 pi=[78,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 97 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=7 ec=61/50 lis/c=95/76 les/c/f=96/77/0 sis=97) [0] r=0 lpr=97 pi=[76,97)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 97 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=95/76 les/c/f=96/77/0 sis=97) [0] r=0 lpr=97 pi=[76,97)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 97 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=95/76 les/c/f=96/77/0 sis=97) [0] r=0 lpr=97 pi=[76,97)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:56 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:17:57 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 11 09:17:57 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 11 09:17:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Dec 11 09:17:57 compute-1 ceph-mon[80018]: 12.1c scrub starts
Dec 11 09:17:57 compute-1 ceph-mon[80018]: 12.1c scrub ok
Dec 11 09:17:57 compute-1 ceph-mon[80018]: 9.3 scrub starts
Dec 11 09:17:57 compute-1 ceph-mon[80018]: 9.3 scrub ok
Dec 11 09:17:57 compute-1 ceph-mon[80018]: 11.7 scrub starts
Dec 11 09:17:57 compute-1 ceph-mon[80018]: 11.7 scrub ok
Dec 11 09:17:57 compute-1 ceph-mon[80018]: osdmap e97: 3 total, 3 up, 3 in
Dec 11 09:17:57 compute-1 ceph-mon[80018]: pgmap v117: 353 pgs: 2 unknown, 2 active+remapped, 349 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 1 objects/s recovering
Dec 11 09:17:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:57 compute-1 ceph-mon[80018]: Deploying daemon haproxy.rgw.default.compute-2.amjwbo on compute-2
Dec 11 09:17:57 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:57 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:57 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:57 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=78/78 les/c/f=79/79/0 sis=98) [0]/[1] r=-1 lpr=98 pi=[78,98)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:17:57 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 98 pg[10.c( v 56'1088 (0'0,56'1088] local-lis/les=97/98 n=5 ec=61/50 lis/c=95/76 les/c/f=96/77/0 sis=97) [0] r=0 lpr=97 pi=[76,97)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:57 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 98 pg[10.1c( v 56'1088 (0'0,56'1088] local-lis/les=97/98 n=7 ec=61/50 lis/c=95/76 les/c/f=96/77/0 sis=97) [0] r=0 lpr=97 pi=[76,97)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:17:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:57 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:17:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.003000083s ======
Dec 11 09:17:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:17:57.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000083s
Dec 11 09:17:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:57 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:58 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:58 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 11 09:17:58 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 11 09:17:58 compute-1 ceph-mon[80018]: 12.19 scrub starts
Dec 11 09:17:58 compute-1 ceph-mon[80018]: 12.19 scrub ok
Dec 11 09:17:58 compute-1 ceph-mon[80018]: 8.2 scrub starts
Dec 11 09:17:58 compute-1 ceph-mon[80018]: 8.2 scrub ok
Dec 11 09:17:58 compute-1 ceph-mon[80018]: 8.1b scrub starts
Dec 11 09:17:58 compute-1 ceph-mon[80018]: 8.1b scrub ok
Dec 11 09:17:58 compute-1 ceph-mon[80018]: osdmap e98: 3 total, 3 up, 3 in
Dec 11 09:17:58 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Dec 11 09:17:59 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 11 09:17:59 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 11 09:17:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:59 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 10.12 scrub starts
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 10.12 scrub ok
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 11.17 scrub starts
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 11.17 scrub ok
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 9.6 scrub starts
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 9.6 scrub ok
Dec 11 09:17:59 compute-1 ceph-mon[80018]: osdmap e99: 3 total, 3 up, 3 in
Dec 11 09:17:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 10.2 scrub starts
Dec 11 09:17:59 compute-1 ceph-mon[80018]: pgmap v120: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Dec 11 09:17:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:17:59 compute-1 ceph-mon[80018]: Deploying daemon keepalived.rgw.default.compute-2.ippkne on compute-2
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 10.2 scrub ok
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 8.c scrub starts
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 8.c scrub ok
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 8.10 scrub starts
Dec 11 09:17:59 compute-1 ceph-mon[80018]: 8.10 scrub ok
Dec 11 09:17:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:17:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:17:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:17:59.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:17:59 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Dec 11 09:17:59 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 100 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=8 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:59 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 100 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=8 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:59 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 100 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:17:59 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 100 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:17:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:17:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.002000055s ======
Dec 11 09:17:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:17:59.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 11 09:17:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:17:59 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:00 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:00 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 11 09:18:00 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 11 09:18:00 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Dec 11 09:18:00 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 101 pg[10.d( v 56'1088 (0'0,56'1088] local-lis/les=100/101 n=8 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:00 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 101 pg[10.1d( v 56'1088 (0'0,56'1088] local-lis/les=100/101 n=5 ec=61/50 lis/c=98/78 les/c/f=99/79/0 sis=100) [0] r=0 lpr=100 pi=[78,100)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:00 compute-1 ceph-mon[80018]: osdmap e100: 3 total, 3 up, 3 in
Dec 11 09:18:00 compute-1 ceph-mon[80018]: 10.5 deep-scrub starts
Dec 11 09:18:00 compute-1 ceph-mon[80018]: 10.5 deep-scrub ok
Dec 11 09:18:00 compute-1 ceph-mon[80018]: 9.8 scrub starts
Dec 11 09:18:00 compute-1 ceph-mon[80018]: 9.8 scrub ok
Dec 11 09:18:00 compute-1 ceph-mon[80018]: 11.14 scrub starts
Dec 11 09:18:00 compute-1 ceph-mon[80018]: 11.14 scrub ok
Dec 11 09:18:01 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 11 09:18:01 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 11 09:18:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:01 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5640016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:01.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:01 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:01 compute-1 ceph-mon[80018]: osdmap e101: 3 total, 3 up, 3 in
Dec 11 09:18:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:01 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec 11 09:18:01 compute-1 ceph-mon[80018]: Deploying daemon keepalived.rgw.default.compute-0.gxvbmc on compute-0
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 10.18 scrub starts
Dec 11 09:18:01 compute-1 ceph-mon[80018]: pgmap v123: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 5 objects/s recovering
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 10.18 scrub ok
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 9.9 scrub starts
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 9.9 scrub ok
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 9.d scrub starts
Dec 11 09:18:01 compute-1 ceph-mon[80018]: 9.d scrub ok
Dec 11 09:18:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:01 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 11 09:18:02 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 11 09:18:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:02 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:03 compute-1 ceph-mon[80018]: 9.7 scrub starts
Dec 11 09:18:03 compute-1 ceph-mon[80018]: 9.7 scrub ok
Dec 11 09:18:03 compute-1 ceph-mon[80018]: 9.f scrub starts
Dec 11 09:18:03 compute-1 ceph-mon[80018]: 9.f scrub ok
Dec 11 09:18:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:03 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 11 09:18:03 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 11 09:18:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:03 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:03.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:03 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:04 compute-1 ceph-mon[80018]: pgmap v124: 353 pgs: 1 active+clean+scrubbing, 2 remapped+peering, 350 active+clean; 456 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 4 objects/s recovering
Dec 11 09:18:04 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:04 compute-1 ceph-mon[80018]: 11.e scrub starts
Dec 11 09:18:04 compute-1 ceph-mon[80018]: 11.e scrub ok
Dec 11 09:18:04 compute-1 ceph-mon[80018]: 10.e scrub starts
Dec 11 09:18:04 compute-1 ceph-mon[80018]: 10.e scrub ok
Dec 11 09:18:04 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:04 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc568003c50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 11 09:18:04 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 11 09:18:05 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Dec 11 09:18:05 compute-1 ceph-mon[80018]: Deploying daemon prometheus.compute-0 on compute-0
Dec 11 09:18:05 compute-1 ceph-mon[80018]: 11.3 scrub starts
Dec 11 09:18:05 compute-1 ceph-mon[80018]: 11.3 scrub ok
Dec 11 09:18:05 compute-1 ceph-mon[80018]: 10.a scrub starts
Dec 11 09:18:05 compute-1 ceph-mon[80018]: 10.a scrub ok
Dec 11 09:18:05 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec 11 09:18:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 11 09:18:05 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 11 09:18:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:05 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:05 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 11 09:18:06 compute-1 ceph-mon[80018]: pgmap v125: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:06 compute-1 ceph-mon[80018]: 8.d scrub starts
Dec 11 09:18:06 compute-1 ceph-mon[80018]: 8.d scrub ok
Dec 11 09:18:06 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 11 09:18:06 compute-1 ceph-mon[80018]: osdmap e102: 3 total, 3 up, 3 in
Dec 11 09:18:06 compute-1 ceph-mon[80018]: 10.9 scrub starts
Dec 11 09:18:06 compute-1 ceph-mon[80018]: 10.9 scrub ok
Dec 11 09:18:06 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 11 09:18:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:06 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.230341) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686230429, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1022, "num_deletes": 250, "total_data_size": 1660651, "memory_usage": 1685376, "flush_reason": "Manual Compaction"}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686242284, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1056110, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7248, "largest_seqno": 8265, "table_properties": {"data_size": 1051173, "index_size": 2269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12904, "raw_average_key_size": 20, "raw_value_size": 1040079, "raw_average_value_size": 1640, "num_data_blocks": 100, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444662, "oldest_key_time": 1765444662, "file_creation_time": 1765444686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11980 microseconds, and 4052 cpu microseconds.
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.242330) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1056110 bytes OK
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.242351) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.244023) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.244078) EVENT_LOG_v1 {"time_micros": 1765444686244067, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.244105) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1655011, prev total WAL file size 1655011, number of live WAL files 2.
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.245007) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1031KB)], [15(12MB)]
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686245115, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 13667890, "oldest_snapshot_seqno": -1}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3546 keys, 13239914 bytes, temperature: kUnknown
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686334695, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13239914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13210919, "index_size": 19029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8901, "raw_key_size": 91027, "raw_average_key_size": 25, "raw_value_size": 13140365, "raw_average_value_size": 3705, "num_data_blocks": 826, "num_entries": 3546, "num_filter_entries": 3546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444500, "oldest_key_time": 0, "file_creation_time": 1765444686, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.335123) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13239914 bytes
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.336823) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.3 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 12.0 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(25.5) write-amplify(12.5) OK, records in: 4067, records dropped: 521 output_compression: NoCompression
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.336848) EVENT_LOG_v1 {"time_micros": 1765444686336836, "job": 6, "event": "compaction_finished", "compaction_time_micros": 89717, "compaction_time_cpu_micros": 37716, "output_level": 6, "num_output_files": 1, "total_output_size": 13239914, "num_input_records": 4067, "num_output_records": 3546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686337172, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444686340042, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.244874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.340146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.340154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.340156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.340158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:18:06.340160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:18:06 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 11 09:18:07 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 11 09:18:07 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Dec 11 09:18:07 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 103 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=7 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=103 pruub=12.765971184s) [2] r=-1 lpr=103 pi=[85,103)/1 crt=56'1088 mlcod 0'0 active pruub 240.866348267s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:07 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 103 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=7 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=103 pruub=12.765752792s) [2] r=-1 lpr=103 pi=[85,103)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 240.866348267s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:07 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 103 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=103 pruub=12.765124321s) [2] r=-1 lpr=103 pi=[85,103)/1 crt=56'1088 mlcod 0'0 active pruub 240.866363525s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:07 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 103 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=103 pruub=12.765007973s) [2] r=-1 lpr=103 pi=[85,103)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 240.866363525s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:07 compute-1 ceph-mon[80018]: 10.3 scrub starts
Dec 11 09:18:07 compute-1 ceph-mon[80018]: 10.3 scrub ok
Dec 11 09:18:07 compute-1 ceph-mon[80018]: 10.6 scrub starts
Dec 11 09:18:07 compute-1 ceph-mon[80018]: 10.6 scrub ok
Dec 11 09:18:07 compute-1 ceph-mon[80018]: pgmap v127: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:07 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec 11 09:18:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:07 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:07.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:07 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 11 09:18:08 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 11 09:18:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:08 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:08 compute-1 ceph-mon[80018]: 10.19 scrub starts
Dec 11 09:18:08 compute-1 ceph-mon[80018]: 10.19 scrub ok
Dec 11 09:18:08 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 11 09:18:08 compute-1 ceph-mon[80018]: osdmap e103: 3 total, 3 up, 3 in
Dec 11 09:18:08 compute-1 ceph-mon[80018]: 10.b scrub starts
Dec 11 09:18:08 compute-1 ceph-mon[80018]: 10.b scrub ok
Dec 11 09:18:08 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Dec 11 09:18:08 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 104 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=7 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=104) [2]/[0] r=0 lpr=104 pi=[85,104)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:08 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 104 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=7 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=104) [2]/[0] r=0 lpr=104 pi=[85,104)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:08 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 104 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=104) [2]/[0] r=0 lpr=104 pi=[85,104)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:08 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 104 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=85/86 n=5 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=104) [2]/[0] r=0 lpr=104 pi=[85,104)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 11 09:18:09 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 11 09:18:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:09 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:09.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:09 compute-1 ceph-mon[80018]: osdmap e104: 3 total, 3 up, 3 in
Dec 11 09:18:09 compute-1 ceph-mon[80018]: pgmap v130: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:09 compute-1 ceph-mon[80018]: 10.10 scrub starts
Dec 11 09:18:09 compute-1 ceph-mon[80018]: 10.10 scrub ok
Dec 11 09:18:09 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Dec 11 09:18:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 105 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=104/105 n=7 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[85,104)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:09 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 105 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=104/105 n=5 ec=61/50 lis/c=85/85 les/c/f=86/86/0 sis=104) [2]/[0] async=[2] r=0 lpr=104 pi=[85,104)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:09.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:09 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 11 09:18:10 compute-1 ceph-osd[77625]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 11 09:18:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:10 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:10 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Dec 11 09:18:10 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 106 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=104/105 n=7 ec=61/50 lis/c=104/85 les/c/f=105/86/0 sis=106 pruub=14.993420601s) [2] async=[2] r=-1 lpr=106 pi=[85,106)/1 crt=56'1088 mlcod 56'1088 active pruub 246.335189819s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:10 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 106 pg[10.f( v 56'1088 (0'0,56'1088] local-lis/les=104/105 n=7 ec=61/50 lis/c=104/85 les/c/f=105/86/0 sis=106 pruub=14.993220329s) [2] r=-1 lpr=106 pi=[85,106)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 246.335189819s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:10 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 106 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=104/105 n=5 ec=61/50 lis/c=104/85 les/c/f=105/86/0 sis=106 pruub=14.996276855s) [2] async=[2] r=-1 lpr=106 pi=[85,106)/1 crt=56'1088 mlcod 56'1088 active pruub 246.338958740s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:10 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 106 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=104/105 n=5 ec=61/50 lis/c=104/85 les/c/f=105/86/0 sis=106 pruub=14.995888710s) [2] r=-1 lpr=106 pi=[85,106)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 246.338958740s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:10 compute-1 ceph-mon[80018]: osdmap e105: 3 total, 3 up, 3 in
Dec 11 09:18:10 compute-1 ceph-mon[80018]: 10.1b scrub starts
Dec 11 09:18:10 compute-1 ceph-mon[80018]: 10.1b scrub ok
Dec 11 09:18:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:11 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:11.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:11 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:11 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Dec 11 09:18:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:11 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:11 compute-1 ceph-mon[80018]: osdmap e106: 3 total, 3 up, 3 in
Dec 11 09:18:11 compute-1 ceph-mon[80018]: pgmap v133: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:12 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:12 compute-1 ceph-mon[80018]: osdmap e107: 3 total, 3 up, 3 in
Dec 11 09:18:12 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:12 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec 11 09:18:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:13 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:13.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:13 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:14 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc5840041f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:15 compute-1 ceph-mon[80018]: pgmap v135: 353 pgs: 2 unknown, 351 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  1: '-n'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  2: 'mgr.compute-1.unesvp'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  3: '-f'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  4: '--setuser'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  5: 'ceph'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr respawn  6: '--setgroup'
Dec 11 09:18:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setuser ceph since I am not root
Dec 11 09:18:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: ignoring --setgroup ceph since I am not root
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: pidfile_write: ignore empty --pid-file
Dec 11 09:18:15 compute-1 sshd-session[81786]: Connection closed by 192.168.122.100 port 54308
Dec 11 09:18:15 compute-1 sshd-session[81767]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 11 09:18:15 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Dec 11 09:18:15 compute-1 systemd[1]: session-34.scope: Consumed 21.293s CPU time.
Dec 11 09:18:15 compute-1 systemd-logind[791]: Session 34 logged out. Waiting for processes to exit.
Dec 11 09:18:15 compute-1 systemd-logind[791]: Removed session 34.
Dec 11 09:18:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:15 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003ea0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'alerts'
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'balancer'
Dec 11 09:18:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:15.478+0000 7fb837e02140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:15.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'cephadm'
Dec 11 09:18:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:15.569+0000 7fb837e02140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 11 09:18:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:15 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Dec 11 09:18:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:15 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 108 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=108 pruub=14.960214615s) [2] r=-1 lpr=108 pi=[61,108)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 251.479614258s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:15 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 108 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=108 pruub=14.960012436s) [2] r=-1 lpr=108 pi=[61,108)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 251.479614258s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:16 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Dec 11 09:18:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 109 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=109) [2]/[0] r=0 lpr=109 pi=[61,109)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:16 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 109 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=61/62 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=109) [2]/[0] r=0 lpr=109 pi=[61,109)/1 crt=56'1088 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:16 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:16 compute-1 ceph-mon[80018]: pgmap v136: 353 pgs: 353 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Dec 11 09:18:16 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec 11 09:18:16 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec 11 09:18:16 compute-1 ceph-mon[80018]: mgrmap e30: compute-0.wwpcae(active, since 2m), standbys: compute-1.unesvp, compute-2.uiimcn
Dec 11 09:18:16 compute-1 ceph-mon[80018]: from='mgr.14400 192.168.122.100:0/1063598679' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 11 09:18:16 compute-1 ceph-mon[80018]: osdmap e108: 3 total, 3 up, 3 in
Dec 11 09:18:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:16 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:16 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'crash'
Dec 11 09:18:16 compute-1 ceph-mgr[80326]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:18:16 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'dashboard'
Dec 11 09:18:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:16.446+0000 7fb837e02140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 11 09:18:16 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'devicehealth'
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'diskprediction_local'
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:17.184+0000 7fb837e02140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Dec 11 09:18:17 compute-1 ceph-mon[80018]: osdmap e109: 3 total, 3 up, 3 in
Dec 11 09:18:17 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 110 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=109/110 n=2 ec=61/50 lis/c=61/61 les/c/f=62/62/0 sis=109) [2]/[0] async=[2] r=0 lpr=109 pi=[61,109)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:17 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]:   from numpy import show_config as show_numpy_config
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:17.371+0000 7fb837e02140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'influx'
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:17.449+0000 7fb837e02140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'insights'
Dec 11 09:18:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:17.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'iostat'
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:17.606+0000 7fb837e02140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 11 09:18:17 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'k8sevents'
Dec 11 09:18:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:17 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003ea0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:18 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'localpool'
Dec 11 09:18:18 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mds_autoscaler'
Dec 11 09:18:18 compute-1 ceph-mon[80018]: osdmap e110: 3 total, 3 up, 3 in
Dec 11 09:18:18 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Dec 11 09:18:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 111 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=109/110 n=2 ec=61/50 lis/c=109/61 les/c/f=110/62/0 sis=111 pruub=14.972633362s) [2] async=[2] r=-1 lpr=111 pi=[61,111)/1 crt=56'1088 lcod 0'0 mlcod 0'0 active pruub 254.043228149s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:18 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 111 pg[10.10( v 56'1088 (0'0,56'1088] local-lis/les=109/110 n=2 ec=61/50 lis/c=109/61 les/c/f=110/62/0 sis=111 pruub=14.972487450s) [2] r=-1 lpr=111 pi=[61,111)/1 crt=56'1088 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 254.043228149s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:18 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:18 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'mirroring'
Dec 11 09:18:18 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'nfs'
Dec 11 09:18:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:18.765+0000 7fb837e02140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:18:18 compute-1 ceph-mgr[80326]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 11 09:18:18 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'orchestrator'
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.019+0000 7fb837e02140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_perf_query'
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.104+0000 7fb837e02140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'osd_support'
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.185+0000 7fb837e02140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'pg_autoscaler'
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.279+0000 7fb837e02140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'progress'
Dec 11 09:18:19 compute-1 ceph-mon[80018]: osdmap e111: 3 total, 3 up, 3 in
Dec 11 09:18:19 compute-1 sshd-session[85640]: Connection closed by 194.164.107.6 port 60132 [preauth]
Dec 11 09:18:19 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.357+0000 7fb837e02140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'prometheus'
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:19 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:19.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:19.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.729+0000 7fb837e02140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rbd_support'
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:19 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:19.836+0000 7fb837e02140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 11 09:18:19 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'restful'
Dec 11 09:18:20 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rgw'
Dec 11 09:18:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:20.309+0000 7fb837e02140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-1 ceph-mgr[80326]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'rook'
Dec 11 09:18:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:20 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:20 compute-1 ceph-mon[80018]: osdmap e112: 3 total, 3 up, 3 in
Dec 11 09:18:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:20.938+0000 7fb837e02140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-1 ceph-mgr[80326]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 11 09:18:20 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'selftest'
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:21.005+0000 7fb837e02140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'snap_schedule'
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:21.093+0000 7fb837e02140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'stats'
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'status'
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:21.256+0000 7fb837e02140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telegraf'
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:21.327+0000 7fb837e02140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'telemetry'
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:21 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003ea0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:21.494+0000 7fb837e02140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'test_orchestrator'
Dec 11 09:18:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:21.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:21 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:21.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:21.707+0000 7fb837e02140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 11 09:18:21 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'volumes'
Dec 11 09:18:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:21 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:22.077+0000 7fb837e02140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: mgr[py] Loading python module 'zabbix'
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 2025-12-11T09:18:22.175+0000 7fb837e02140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: mgr load Constructed class from module: dashboard
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: [11/Dec/2025:09:18:22] ENGINE Bus STARTING
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: CherryPy Checker:
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: The Application mounted at '' has an empty config.
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: 
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: mgr load Constructed class from module: prometheus
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: ms_deliver_dispatch: unhandled message 0x55db60873860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus INFO root] Starting engine...
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:22] ENGINE Bus STARTING
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [dashboard INFO root] Starting engine...
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [dashboard INFO root] Engine started...
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: [11/Dec/2025:09:18:22] ENGINE Serving on http://:::9283
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:22] ENGINE Serving on http://:::9283
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-mgr-compute-1-unesvp[80322]: [11/Dec/2025:09:18:22] ENGINE Bus STARTED
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus INFO cherrypy.error] [11/Dec/2025:09:18:22] ENGINE Bus STARTED
Dec 11 09:18:22 compute-1 ceph-mgr[80326]: [prometheus INFO root] Engine started.
Dec 11 09:18:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:22 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:22 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp restarted
Dec 11 09:18:22 compute-1 ceph-mon[80018]: Standby manager daemon compute-1.unesvp started
Dec 11 09:18:23 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Dec 11 09:18:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:23 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc564003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:23 compute-1 sshd-session[85668]: Accepted publickey for ceph-admin from 192.168.122.100 port 49580 ssh2: RSA SHA256:VeIx2NZka5hi0niQjHCLie+FE2InrWghbFhBMbpMPGo
Dec 11 09:18:23 compute-1 systemd-logind[791]: New session 36 of user ceph-admin.
Dec 11 09:18:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:23.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:23 compute-1 systemd[1]: Started Session 36 of User ceph-admin.
Dec 11 09:18:23 compute-1 sshd-session[85668]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 11 09:18:23 compute-1 sudo[85672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:23 compute-1 sudo[85672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:23 compute-1 sudo[85672]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:23 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc560003ea0 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:23 compute-1 ceph-mon[80018]: mgrmap e31: compute-0.wwpcae(active, since 2m), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:23 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn restarted
Dec 11 09:18:23 compute-1 ceph-mon[80018]: Standby manager daemon compute-2.uiimcn started
Dec 11 09:18:23 compute-1 ceph-mon[80018]: Active manager daemon compute-0.wwpcae restarted
Dec 11 09:18:23 compute-1 ceph-mon[80018]: Activating manager daemon compute-0.wwpcae
Dec 11 09:18:23 compute-1 ceph-mon[80018]: osdmap e113: 3 total, 3 up, 3 in
Dec 11 09:18:23 compute-1 ceph-mon[80018]: mgrmap e32: compute-0.wwpcae(active, starting, since 0.027586s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ejykhm"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.abebdg"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.hifxsh"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wwpcae", "id": "compute-0.wwpcae"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-2.uiimcn", "id": "compute-2.uiimcn"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr metadata", "who": "compute-1.unesvp", "id": "compute-1.unesvp"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: Manager daemon compute-0.wwpcae is now available
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/mirror_snapshot_schedule"}]: dispatch
Dec 11 09:18:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wwpcae/trash_purge_schedule"}]: dispatch
Dec 11 09:18:24 compute-1 sudo[85697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Dec 11 09:18:24 compute-1 sudo[85697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:24 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:24 compute-1 podman[85794]: 2025-12-11 09:18:24.630854702 +0000 UTC m=+0.059291323 container exec 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:18:24 compute-1 podman[85794]: 2025-12-11 09:18:24.725583287 +0000 UTC m=+0.154019908 container exec_died 4dc7a01fc77929a241692c9176e06ec9f7b5ebbb2b3ca54ee3e07c7a7ce020fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-crash-compute-1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec 11 09:18:25 compute-1 ceph-mon[80018]: mgrmap e33: compute-0.wwpcae(active, since 1.0456s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:25 compute-1 ceph-mon[80018]: pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:25 compute-1 podman[85913]: 2025-12-11 09:18:25.131105792 +0000 UTC m=+0.056708030 container exec 851fbde2323ef3f3948778a526539f2fee5bda7b4a505af3c7d6d952672edf2a (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:25 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Dec 11 09:18:25 compute-1 podman[85913]: 2025-12-11 09:18:25.162851142 +0000 UTC m=+0.088453400 container exec_died 851fbde2323ef3f3948778a526539f2fee5bda7b4a505af3c7d6d952672edf2a (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 11 09:18:25 compute-1 kernel: ganesha.nfsd[84830]: segfault at 50 ip 00007fc60eeea32e sp 00007fc594ff8210 error 4 in libntirpc.so.5.8[7fc60eecf000+2c000] likely on CPU 3 (core 0, socket 3)
Dec 11 09:18:25 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 11 09:18:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[84783]: 11/12/2025 09:18:25 : epoch 693a8bf3 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc578004810 fd 48 proxy ignored for local
Dec 11 09:18:25 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 11 09:18:25 compute-1 systemd[1]: Started Process Core Dump (PID 85998/UID 0).
Dec 11 09:18:25 compute-1 podman[86006]: 2025-12-11 09:18:25.459247729 +0000 UTC m=+0.050784794 container exec 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 11 09:18:25 compute-1 podman[86006]: 2025-12-11 09:18:25.474509257 +0000 UTC m=+0.066046302 container exec_died 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:25.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:25 compute-1 podman[86066]: 2025-12-11 09:18:25.731045567 +0000 UTC m=+0.073475850 container exec 84770fcf349b08bd38d1666648e2a6f98ee094683e2088d6e929ae3ede50a6ed (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay)
Dec 11 09:18:25 compute-1 podman[86066]: 2025-12-11 09:18:25.761763998 +0000 UTC m=+0.104194301 container exec_died 84770fcf349b08bd38d1666648e2a6f98ee094683e2088d6e929ae3ede50a6ed (image=quay.io/ceph/haproxy:2.3, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay)
Dec 11 09:18:26 compute-1 podman[86128]: 2025-12-11 09:18:26.040692526 +0000 UTC m=+0.082876724 container exec 41c18657fbf15d474ce33c6dd515a50eabf8b7f96b8ef7241f573dfb1b064354 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Dec 11 09:18:26 compute-1 podman[86128]: 2025-12-11 09:18:26.050359997 +0000 UTC m=+0.092544185 container exec_died 41c18657fbf15d474ce33c6dd515a50eabf8b7f96b8ef7241f573dfb1b064354 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-keepalived-nfs-cephfs-compute-1-aigyat, vendor=Red Hat, Inc., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, name=keepalived, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Dec 11 09:18:26 compute-1 ceph-mon[80018]: [11/Dec/2025:09:18:24] ENGINE Bus STARTING
Dec 11 09:18:26 compute-1 ceph-mon[80018]: [11/Dec/2025:09:18:24] ENGINE Serving on https://192.168.122.100:7150
Dec 11 09:18:26 compute-1 ceph-mon[80018]: [11/Dec/2025:09:18:24] ENGINE Client ('192.168.122.100', 33772) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 11 09:18:26 compute-1 ceph-mon[80018]: [11/Dec/2025:09:18:24] ENGINE Serving on http://192.168.122.100:8765
Dec 11 09:18:26 compute-1 ceph-mon[80018]: [11/Dec/2025:09:18:24] ENGINE Bus STARTED
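The ENGINE lines are the mgr's embedded CherryPy bus bringing its HTTP endpoints back up: the TLS dashboard on 7150 and a plain-HTTP endpoint on 8765. The "Client ... lost" entry is what cheroot logs when a peer opens the TCP connection and closes it before completing the TLS handshake; that is the signature of a bare TCP liveness probe, not a certificate problem. A probe that would leave exactly this kind of entry, using the address and port from the line above:

    import socket

    # Connect to the TLS endpoint and close without speaking TLS;
    # the server side sees EOF in the middle of the handshake.
    with socket.create_connection(('192.168.122.100', 7150), timeout=2):
        pass
    print('port accepts connections')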
Dec 11 09:18:26 compute-1 ceph-mon[80018]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:26 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec 11 09:18:26 compute-1 ceph-mon[80018]: mgrmap e34: compute-0.wwpcae(active, since 2s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:26 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
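Every command the active mgr forwards appears twice in the mon's log: once as cmd=[...]: dispatch when it is queued, and once as cmd='[...]': finished when it has been committed (only the finished form quotes the command). A small sketch that pairs the two while scanning a journal dump; the patterns are assumptions taken from the lines above:

    import re

    DISPATCH = re.compile(r"cmd=(\[.*\]): dispatch")
    FINISHED = re.compile(r"cmd='(\[.*\])': finished")

    pending = set()

    def feed(line: str) -> None:
        # Track each mon command from dispatch until its finished entry.
        m = DISPATCH.search(line)
        if m:
            pending.add(m.group(1))
            return
        m = FINISHED.search(line)
        if m and m.group(1) in pending:
            pending.discard(m.group(1))
            print('completed:', m.group(1))

    feed('... cmd=[{"prefix": "osd pool set"}]: dispatch')
    feed("... cmd='[{\"prefix\": \"osd pool set\"}]': finished")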
Dec 11 09:18:26 compute-1 ceph-mon[80018]: osdmap e114: 3 total, 3 up, 3 in
Dec 11 09:18:26 compute-1 sudo[85697]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:26 compute-1 sudo[86161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:26 compute-1 sudo[86161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:26 compute-1 sudo[86161]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:26 compute-1 sudo[86186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:18:26 compute-1 sudo[86186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:26 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:26 compute-1 sudo[86186]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:26 compute-1 sudo[86242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:26 compute-1 sudo[86242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:26 compute-1 sudo[86242]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:27 compute-1 sudo[86267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Dec 11 09:18:27 compute-1 sudo[86267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:27 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:27 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec 11 09:18:27 compute-1 sudo[86267]: pam_unix(sudo:session): session closed for user root
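The sudo records above are the cephadm mgr module probing this host: a 'which python3' first, then the staged cephadm binary run as root with a subcommand (gather-facts here, list-networks above, _orch deploy further down), each emitting machine-readable output for the mgr to consume. A local sketch of the same invocation; the path is the fsid-suffixed copy from the log, and the 'hostname' key is an assumption about the fact dump's shape:

    import json
    import subprocess

    CEPHADM = ('/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/'
               'cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af'
               '2ee40ac466f0ac36')

    # gather-facts prints a JSON description of the host (CPUs, memory,
    # NICs, kernel); the mgr uses it for scheduling decisions.
    out = subprocess.run(
        ['sudo', '/bin/python3', CEPHADM, '--timeout', '895', 'gather-facts'],
        capture_output=True, text=True, check=True,
    ).stdout
    facts = json.loads(out)
    print(facts.get('hostname'))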
Dec 11 09:18:27 compute-1 systemd-coredump[86005]: Process 84787 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 45:
                                                   #0  0x00007fc60eeea32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Dec 11 09:18:27 compute-1 systemd[1]: systemd-coredump@0-85998-0.service: Deactivated successfully.
Dec 11 09:18:27 compute-1 systemd[1]: systemd-coredump@0-85998-0.service: Consumed 1.924s CPU time.
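The segfault recorded above (ganesha.nfsd dumping core; the unit's status=139 exit appears a few lines below) stays queryable through systemd-coredump after the fact. A minimal lookup by the crashed PID, assuming coredumpctl is installed on the host:

    import subprocess

    # 'coredumpctl info' prints the stored metadata and stack trace for
    # the matching dump; filtering accepts a PID or executable name.
    subprocess.run(['coredumpctl', 'info', '84787'], check=False)
    # For an interactive debugger on the same core:
    #   coredumpctl gdb 84787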
Dec 11 09:18:27 compute-1 podman[86314]: 2025-12-11 09:18:27.471115618 +0000 UTC m=+0.053672596 container died 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:27 compute-1 systemd[1]: var-lib-containers-storage-overlay-3445b3b4e2ab8984559e54c49073d5e3591dada623a1dfeaf1ecb09f7be42ff0-merged.mount: Deactivated successfully.
Dec 11 09:18:27 compute-1 podman[86314]: 2025-12-11 09:18:27.511604473 +0000 UTC m=+0.094161411 container remove 55ab6eb507694675ce8a6b970f906dac2a8d71d1d5e09b9113be5ed08bc7944c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec 11 09:18:27 compute-1 systemd[81771]: Starting Mark boot as successful...
Dec 11 09:18:27 compute-1 systemd[81771]: Finished Mark boot as successful.
Dec 11 09:18:27 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Main process exited, code=exited, status=139/n/a
Dec 11 09:18:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:27.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:27 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Failed with result 'exit-code'.
Dec 11 09:18:27 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Consumed 2.154s CPU time.
Dec 11 09:18:28 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Dec 11 09:18:28 compute-1 ceph-mon[80018]: pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:28 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 11 09:18:28 compute-1 ceph-mon[80018]: osdmap e115: 3 total, 3 up, 3 in
Dec 11 09:18:28 compute-1 ceph-mon[80018]: mgrmap e35: compute-0.wwpcae(active, since 5s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:18:28 compute-1 sudo[86362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:18:28 compute-1 sudo[86362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:28 compute-1 sudo[86362]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:18:29 compute-1 sudo[86387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86387]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-1 sudo[86412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86412]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:29 compute-1 sudo[86437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86437]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-1 sudo[86462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86462]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-1 sudo[86510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86510]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new
Dec 11 09:18:29 compute-1 sudo[86535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86535]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 11 09:18:29 compute-1 sudo[86560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:29.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:29 compute-1 sudo[86560]: pam_unix(sudo:session): session closed for user root
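The mkdir/touch/chown/chmod/mv run above (and the keyring variant that follows) is cephadm's write-then-rename pattern: stage the file under /tmp/cephadm-<fsid>, set ownership and mode while nothing references it, then mv it onto the real path so readers of /etc/ceph/ceph.conf never see a half-written file. The same steps in plain Python; note the rename is only atomic when the staging directory is on the same filesystem as the target:

    import os
    import shutil

    STAGE = ('/tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/'
             'etc/ceph/ceph.conf.new')
    FINAL = '/etc/ceph/ceph.conf'

    os.makedirs(os.path.dirname(STAGE), exist_ok=True)
    with open(STAGE, 'w') as f:
        f.write('# minimal ceph.conf contents\n')
    os.chown(STAGE, 0, 0)      # chown -R 0:0, as in the log (requires root)
    os.chmod(STAGE, 0o644)     # 644 for ceph.conf; keyrings get 600 below
    shutil.move(STAGE, FINAL)  # the final mv into place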
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:18:29 compute-1 ceph-mon[80018]: Updating compute-0:/etc/ceph/ceph.conf
Dec 11 09:18:29 compute-1 ceph-mon[80018]: Updating compute-1:/etc/ceph/ceph.conf
Dec 11 09:18:29 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.conf
Dec 11 09:18:29 compute-1 ceph-mon[80018]: pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:29 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec 11 09:18:29 compute-1 sudo[86585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:29 compute-1 sudo[86585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86585]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:29 compute-1 sudo[86610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:29 compute-1 sudo[86610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86610]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:29 compute-1 sudo[86635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86635]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 sudo[86660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:29 compute-1 sudo[86660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86660]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:29 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Dec 11 09:18:29 compute-1 sudo[86685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:29 compute-1 sudo[86685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:29 compute-1 sudo[86685]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:30 compute-1 sudo[86733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86733]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new
Dec 11 09:18:30 compute-1 sudo[86758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86758]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-1 sudo[86783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86783]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 11 09:18:30 compute-1 sudo[86808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86808]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph
Dec 11 09:18:30 compute-1 sudo[86834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86834]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-1 sudo[86859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86859]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:30 compute-1 sudo[86884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86884]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-1 sudo[86909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86909]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-1 sudo[86957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86957]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[86982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new
Dec 11 09:18:30 compute-1 sudo[86982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[86982]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 sudo[87007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-1 sudo[87007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[87007]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:30 compute-1 ceph-mon[80018]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-1 ceph-mon[80018]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.conf
Dec 11 09:18:30 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 11 09:18:30 compute-1 ceph-mon[80018]: osdmap e116: 3 total, 3 up, 3 in
Dec 11 09:18:30 compute-1 ceph-mon[80018]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-1 ceph-mon[80018]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-1 ceph-mon[80018]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec 11 09:18:30 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Dec 11 09:18:30 compute-1 sudo[87032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:30 compute-1 sudo[87032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:30 compute-1 sudo[87032]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 sudo[87057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config
Dec 11 09:18:31 compute-1 sudo[87057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87057]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 sudo[87082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-1 sudo[87082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87082]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 sudo[87107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:31 compute-1 sudo[87107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87107]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 sudo[87132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-1 sudo[87132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87132]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-1 sudo[87180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-1 sudo[87180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87180]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 sudo[87205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new
Dec 11 09:18:31 compute-1 sudo[87205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87205]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 sudo[87230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring.new /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-1 sudo[87230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:31 compute-1 sudo[87230]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:31.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:31 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:31.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:31 compute-1 ceph-mon[80018]: Updating compute-0:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-1 ceph-mon[80018]: Updating compute-1:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-1 ceph-mon[80018]: osdmap e117: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-1 ceph-mon[80018]: Updating compute-2:/var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/config/ceph.client.admin.keyring
Dec 11 09:18:31 compute-1 ceph-mon[80018]: pgmap v11: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 12 op/s
Dec 11 09:18:31 compute-1 ceph-mon[80018]: osdmap e118: 3 total, 3 up, 3 in
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:18:31 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Dec 11 09:18:33 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Dec 11 09:18:33 compute-1 ceph-mon[80018]: osdmap e119: 3 total, 3 up, 3 in
Dec 11 09:18:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091833 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:18:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:33.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091834 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
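Both NFS backends are now marked DOWN for the reason the earlier coredump predicts: ganesha exited, nothing listens on the NFS port, so haproxy's layer-4 check gets its connection refused. That check is nothing more than a TCP connect; a sketch with the standard NFS port, where the backend addresses are illustrative rather than taken from the haproxy config:

    import socket

    def layer4_check(host: str, port: int = 2049, timeout: float = 2.0) -> bool:
        # haproxy's plain 'check' has the same semantics: a successful
        # connect means UP, refusal or timeout means DOWN.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for backend in ('192.168.122.100', '192.168.122.102'):
        print(backend, 'UP' if layer4_check(backend) else 'DOWN')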
Dec 11 09:18:34 compute-1 ceph-mon[80018]: pgmap v14: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 18 op/s
Dec 11 09:18:34 compute-1 ceph-mon[80018]: osdmap e120: 3 total, 3 up, 3 in
Dec 11 09:18:35 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec 11 09:18:35 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Dec 11 09:18:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:35.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:36 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-1 ceph-mon[80018]: pgmap v16: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 244 B/s rd, 0 op/s; 52 B/s, 2 objects/s recovering
Dec 11 09:18:36 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 11 09:18:36 compute-1 ceph-mon[80018]: osdmap e121: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-1 ceph-mon[80018]: osdmap e122: 3 total, 3 up, 3 in
Dec 11 09:18:36 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 11 09:18:36 compute-1 sudo[87258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:18:36 compute-1 sudo[87258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:36 compute-1 sudo[87258]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:36 compute-1 sudo[87283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:18:36 compute-1 sudo[87283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:36 compute-1 sudo[87283]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:37 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Dec 11 09:18:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:37.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:37.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:37 compute-1 ceph-mon[80018]: Reconfiguring mon.compute-0 (monmap changed)...
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:37 compute-1 ceph-mon[80018]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 11 09:18:37 compute-1 ceph-mon[80018]: pgmap v19: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 212 B/s rd, 0 op/s; 45 B/s, 1 objects/s recovering
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec 11 09:18:37 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
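The long run of pgp_num_actual updates (18, 19, 20, ... through this section) is the mgr nudging the pool's placement count toward its pg_num one step at a time, so each step remaps only a small slice of data; every step is an ordinary 'osd pool set' that the mon dispatches and finishes like any other command. A sketch of the stepping loop; the target value is an assumption, since the pool's pg_num is not shown in this excerpt:

    import subprocess

    POOL = 'default.rgw.log'
    TARGET_PGP_NUM = 32  # assumed; the real target is the pool's pg_num

    def set_pgp_num_actual(pool: str, value: int) -> None:
        # The same command the mgr issues in the log above.
        subprocess.run(
            ['ceph', 'osd', 'pool', 'set', pool,
             'pgp_num_actual', str(value)],
            check=True,
        )

    for value in range(18, TARGET_PGP_NUM + 1):
        set_pgp_num_actual(POOL, value)

In practice an operator sets pg_num and pgp_num and lets the mgr drive the _actual values; the loop above only mirrors what the log records.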
Dec 11 09:18:37 compute-1 ceph-mon[80018]: osdmap e123: 3 total, 3 up, 3 in
Dec 11 09:18:37 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Scheduled restart job, restart counter is at 1.
Dec 11 09:18:37 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.vlrwzy for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:37 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Consumed 2.154s CPU time.
Dec 11 09:18:37 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.vlrwzy for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060...
Dec 11 09:18:38 compute-1 podman[87353]: 2025-12-11 09:18:38.003878346 +0000 UTC m=+0.048172811 container create 67901d62ebbffa9c8adaf2d28b9b62ff2946b775e677dff1a91af4a12ec02c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cf8d0156a3361c5d2cf3f40987c456f6c21e116fea02694d5f46d6b4c8aa3c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cf8d0156a3361c5d2cf3f40987c456f6c21e116fea02694d5f46d6b4c8aa3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cf8d0156a3361c5d2cf3f40987c456f6c21e116fea02694d5f46d6b4c8aa3c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cf8d0156a3361c5d2cf3f40987c456f6c21e116fea02694d5f46d6b4c8aa3c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.vlrwzy-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec 11 09:18:38 compute-1 podman[87353]: 2025-12-11 09:18:38.071255695 +0000 UTC m=+0.115550180 container init 67901d62ebbffa9c8adaf2d28b9b62ff2946b775e677dff1a91af4a12ec02c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid)
Dec 11 09:18:38 compute-1 podman[87353]: 2025-12-11 09:18:38.077570823 +0000 UTC m=+0.121865288 container start 67901d62ebbffa9c8adaf2d28b9b62ff2946b775e677dff1a91af4a12ec02c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec 11 09:18:38 compute-1 podman[87353]: 2025-12-11 09:18:37.98184265 +0000 UTC m=+0.026137165 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:38 compute-1 bash[87353]: 67901d62ebbffa9c8adaf2d28b9b62ff2946b775e677dff1a91af4a12ec02c16
Dec 11 09:18:38 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.vlrwzy for 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060.
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec 11 09:18:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:18:38 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-1 ceph-mon[80018]: Reconfiguring mgr.compute-0.wwpcae (monmap changed)...
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wwpcae", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:38 compute-1 ceph-mon[80018]: Reconfiguring daemon mgr.compute-0.wwpcae on compute-0
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:38 compute-1 ceph-mon[80018]: osdmap e124: 3 total, 3 up, 3 in
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:18:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:39 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Dec 11 09:18:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:39.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:40 compute-1 ceph-mon[80018]: Reconfiguring crash.compute-0 (monmap changed)...
Dec 11 09:18:40 compute-1 ceph-mon[80018]: Reconfiguring daemon crash.compute-0 on compute-0
Dec 11 09:18:40 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:40 compute-1 ceph-mon[80018]: pgmap v22: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:18:40 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:40 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec 11 09:18:40 compute-1 ceph-mon[80018]: Reconfiguring osd.1 (monmap changed)...
Dec 11 09:18:40 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec 11 09:18:40 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:40 compute-1 ceph-mon[80018]: Reconfiguring daemon osd.1 on compute-0
Dec 11 09:18:40 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 11 09:18:40 compute-1 ceph-mon[80018]: osdmap e125: 3 total, 3 up, 3 in
Dec 11 09:18:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:41.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:41 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:41 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:41 compute-1 ceph-mon[80018]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec 11 09:18:41 compute-1 ceph-mon[80018]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec 11 09:18:41 compute-1 ceph-mon[80018]: pgmap v24: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 848 B/s rd, 212 B/s wr, 1 op/s; 0 B/s, 1 objects/s recovering
Dec 11 09:18:41 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:41.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:43 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:43 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:43 compute-1 ceph-mon[80018]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec 11 09:18:43 compute-1 ceph-mon[80018]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec 11 09:18:43 compute-1 ceph-mon[80018]: pgmap v25: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 0 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:43.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:18:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
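After the restart, ganesha holds a 90-second grace window (the IN GRACE line above) so returning clients can reclaim their locks, and the 'check grace:reclaim complete(0) clid count(0)' entry is its periodic test for lifting grace early. Reduced to the two counters in that line, the decision is roughly the following; this is a simplification of ganesha's actual logic, not a transcription of it:

    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        # With no clients left to wait for, or all of them done
        # reclaiming, the grace period may end before the full 90s.
        return clid_count == 0 or reclaim_complete == clid_count

    print(can_lift_grace(reclaim_complete=0, clid_count=0))  # True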
Dec 11 09:18:44 compute-1 sudo[87415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:44 compute-1 sudo[87415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:44 compute-1 sudo[87415]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:44 compute-1 sudo[87440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:44 compute-1 sudo[87440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:44 compute-1 podman[87482]: 2025-12-11 09:18:44.916794812 +0000 UTC m=+0.053853260 container create ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 11 09:18:44 compute-1 systemd[1]: Started libpod-conmon-ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a.scope.
Dec 11 09:18:44 compute-1 podman[87482]: 2025-12-11 09:18:44.89531958 +0000 UTC m=+0.032378028 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:44 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:18:45 compute-1 podman[87482]: 2025-12-11 09:18:45.022034152 +0000 UTC m=+0.159092600 container init ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 11 09:18:45 compute-1 podman[87482]: 2025-12-11 09:18:45.033258937 +0000 UTC m=+0.170317345 container start ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_boyd, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 11 09:18:45 compute-1 podman[87482]: 2025-12-11 09:18:45.036619221 +0000 UTC m=+0.173677679 container attach ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec 11 09:18:45 compute-1 systemd[1]: libpod-ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a.scope: Deactivated successfully.
Dec 11 09:18:45 compute-1 dazzling_boyd[87498]: 167 167
Dec 11 09:18:45 compute-1 conmon[87498]: conmon ada3585243019f21dfcf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a.scope/container/memory.events
Dec 11 09:18:45 compute-1 podman[87482]: 2025-12-11 09:18:45.043541954 +0000 UTC m=+0.180600392 container died ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_boyd, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 11 09:18:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-4432e14f82dc22fea120132a9f4049d885f797d7f2b2af84dbdde11f00559b18-merged.mount: Deactivated successfully.
Dec 11 09:18:45 compute-1 podman[87482]: 2025-12-11 09:18:45.088957248 +0000 UTC m=+0.226015666 container remove ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_boyd, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:18:45 compute-1 systemd[1]: libpod-conmon-ada3585243019f21dfcf498ceb6b07c6b5b34d34e5f9318c1a5d24746394b22a.scope: Deactivated successfully.
Dec 11 09:18:45 compute-1 sudo[87440]: pam_unix(sudo:session): session closed for user root
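The short-lived container above (create, start, attach, died, remove within roughly 150 ms, with "167 167" as its only output) is cephadm probing the image before deploying a daemon: 167 is the uid/gid of the ceph user baked into the quay.io/ceph/ceph image, which cephadm needs so host directories get matching ownership. A minimal sketch of such a probe, assuming podman is on PATH and that stat on /var/lib/ceph is the query (the exact command cephadm runs internally may differ):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def probe_ceph_uid_gid(image: str = IMAGE) -> tuple[int, int]:
        # Throwaway container: print the numeric uid/gid owning
        # /var/lib/ceph inside the image, then auto-remove (--rm).
        out = subprocess.run(
            ["podman", "run", "--rm", "--entrypoint", "stat",
             image, "-c", "%u %g", "/var/lib/ceph"],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        return int(out[0]), int(out[1])  # -> (167, 167) for this image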
Dec 11 09:18:45 compute-1 sudo[87515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:45 compute-1 sudo[87515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:45 compute-1 sudo[87515]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:45 compute-1 sudo[87540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:45 compute-1 sudo[87540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-1 ceph-mon[80018]: Reconfiguring crash.compute-1 (monmap changed)...
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:45 compute-1 ceph-mon[80018]: Reconfiguring daemon crash.compute-1 on compute-1
Dec 11 09:18:45 compute-1 ceph-mon[80018]: pgmap v26: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 924 B/s wr, 3 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:45 compute-1 ceph-mon[80018]: Reconfiguring osd.0 (monmap changed)...
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec 11 09:18:45 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:45 compute-1 ceph-mon[80018]: Reconfiguring daemon osd.0 on compute-1
Dec 11 09:18:45 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Dec 11 09:18:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:18:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:18:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:45.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
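The anonymous "HEAD / HTTP/1.0" requests hitting radosgw every ~2 s from 192.168.122.100 and .102 are most likely health checks from the ingress/haproxy layer (compare the haproxy "check passed" lines further down). A sketch of an equivalent probe, assuming the beast frontend listens on port 8080 (the port is not shown in these lines):

    import http.client

    def rgw_alive(host: str, port: int = 8080) -> bool:
        # Same shape as the logged checks: an anonymous HEAD /,
        # healthy iff the gateway answers 200.
        conn = http.client.HTTPConnection(host, port, timeout=2)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        finally:
            conn.close()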
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.687263537 +0000 UTC m=+0.049379895 container create fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_brattain, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:45 compute-1 systemd[1]: Started libpod-conmon-fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965.scope.
Dec 11 09:18:45 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.752457664 +0000 UTC m=+0.114574032 container init fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_brattain, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.75766526 +0000 UTC m=+0.119781628 container start fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_brattain, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.761393775 +0000 UTC m=+0.123510153 container attach fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_brattain, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 11 09:18:45 compute-1 upbeat_brattain[87597]: 167 167
Dec 11 09:18:45 compute-1 systemd[1]: libpod-fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965.scope: Deactivated successfully.
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.76442905 +0000 UTC m=+0.126545408 container died fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_brattain, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.670501407 +0000 UTC m=+0.032617795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-58c30ddc638097970f61ab5db85e431ba675b347513f5d91489ed666e181f509-merged.mount: Deactivated successfully.
Dec 11 09:18:45 compute-1 podman[87581]: 2025-12-11 09:18:45.797081904 +0000 UTC m=+0.159198282 container remove fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 11 09:18:45 compute-1 systemd[1]: libpod-conmon-fdae5f793d88d947e7797746fed491dc8bcdc736c691114d7530f8eae61f2965.scope: Deactivated successfully.
Dec 11 09:18:45 compute-1 sudo[87540]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:46 compute-1 sudo[87621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:18:46 compute-1 sudo[87621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:46 compute-1 sudo[87621]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:46 compute-1 sudo[87646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060
Dec 11 09:18:46 compute-1 sudo[87646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.41273931 +0000 UTC m=+0.049201009 container create 6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec 11 09:18:46 compute-1 systemd[1]: Started libpod-conmon-6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f.scope.
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.39062479 +0000 UTC m=+0.027086489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec 11 09:18:46 compute-1 systemd[1]: Started libcrun container.
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.50155311 +0000 UTC m=+0.138014839 container init 6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_herschel, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.508241127 +0000 UTC m=+0.144702806 container start 6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_herschel, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:46 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 11 09:18:46 compute-1 ceph-mon[80018]: osdmap e126: 3 total, 3 up, 3 in
Dec 11 09:18:46 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:46 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:46 compute-1 ceph-mon[80018]: Reconfiguring mon.compute-1 (monmap changed)...
Dec 11 09:18:46 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:46 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:46 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:46 compute-1 ceph-mon[80018]: Reconfiguring daemon mon.compute-1 on compute-1
Dec 11 09:18:46 compute-1 interesting_herschel[87706]: 167 167
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.513388031 +0000 UTC m=+0.149849740 container attach 6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 11 09:18:46 compute-1 systemd[1]: libpod-6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f.scope: Deactivated successfully.
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.513970657 +0000 UTC m=+0.150432336 container died 6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 11 09:18:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-f518cb359e5b1450dda464dd9403b8d7d741d86d2b249b9d4f471e45b3e995ab-merged.mount: Deactivated successfully.
Dec 11 09:18:46 compute-1 podman[87689]: 2025-12-11 09:18:46.555244924 +0000 UTC m=+0.191706613 container remove 6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 11 09:18:46 compute-1 systemd[1]: libpod-conmon-6315be6e84c9e8951102db9a94ab12d2a6b7bf7fac1df23e0a2124694ef0e24f.scope: Deactivated successfully.
Dec 11 09:18:46 compute-1 sudo[87646]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:46 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec 11 09:18:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:18:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:18:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
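The four ganesha lines above are the NFSv4 grace-period dance: the server opens a 90 s grace window in which clients may reclaim old state, reloads client info from the RADOS recovery backend, then checks whether grace can be lifted early. With "reclaim complete(0) clid count(0)" there are no clients to wait for, which is why "NFS Server Now NOT IN GRACE" follows at 09:18:50. A toy restatement of that lift check (a simplification of ganesha's nfs_try_lift_grace, not ganesha code):

    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        # No known clients, or every known client finished reclaiming:
        # nothing left to protect, so end grace before the 90 s expire.
        return clid_count == 0 or reclaim_complete >= clid_count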
Dec 11 09:18:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:47.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-1 ceph-mon[80018]: Reconfiguring mon.compute-2 (monmap changed)...
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: Reconfiguring daemon mon.compute-2 on compute-2
Dec 11 09:18:47 compute-1 ceph-mon[80018]: pgmap v28: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 895 B/s wr, 2 op/s; 0 B/s, 0 objects/s recovering
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.uiimcn", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "mgr services"}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:47 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Dec 11 09:18:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:47.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:48 compute-1 ceph-mon[80018]: Reconfiguring mgr.compute-2.uiimcn (monmap changed)...
Dec 11 09:18:48 compute-1 ceph-mon[80018]: Reconfiguring daemon mgr.compute-2.uiimcn on compute-2
Dec 11 09:18:48 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 11 09:18:48 compute-1 ceph-mon[80018]: osdmap e127: 3 total, 3 up, 3 in
Dec 11 09:18:48 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:48 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:48 compute-1 ceph-mon[80018]: Reconfiguring osd.2 (unknown last config time)...
Dec 11 09:18:48 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec 11 09:18:48 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:48 compute-1 ceph-mon[80018]: Reconfiguring daemon osd.2 on compute-2
Dec 11 09:18:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:49.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:49 compute-1 ceph-mon[80018]: pgmap v30: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 767 B/s wr, 2 op/s
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec 11 09:18:49 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:49 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Dec 11 09:18:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:49.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:49 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 128 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=7 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=128 pruub=11.648898125s) [1] r=-1 lpr=128 pi=[91,128)/1 crt=56'1088 mlcod 0'0 active pruub 282.097412109s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:49 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 128 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=7 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=128 pruub=11.648836136s) [1] r=-1 lpr=128 pi=[91,128)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 282.097412109s@ mbc={}] state<Start>: transitioning to Stray
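These PeeringState lines track PG 10.19 being remapped as pgp_num grows: in epoch 128 the up/acting set moves from [0] to [1], osd.0's role drops from 0 to -1, and it parks the PG as a Stray until the new primary takes over (epochs 129-131 below show the interim acting [0] while data migrates, then the final handoff). A small log-reading helper for pulling these transitions out, matching the single-OSD sets printed here:

    import re

    PAT = re.compile(r"up \[(\d+)\] -> \[(\d+)\], acting \[(\d+)\] -> \[(\d+)\]")

    def peering_change(line: str) -> dict | None:
        # Returns the up/acting transition from a
        # PeeringState::start_peering_interval line, or None.
        m = PAT.search(line)
        if m is None:
            return None
        up_old, up_new, act_old, act_new = map(int, m.groups())
        return {"up": (up_old, up_new), "acting": (act_old, act_new)}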
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
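The CRIT-level DBUS messages during this startup are expected for a containerized ganesha: there is no /run/dbus/system_bus_socket inside the container, so dbus_bus_get fails and the dbus service thread exits. Likewise, the krb5 keytab warnings are benign when Kerberos is not configured, and "No export entries found" simply means the daemon started before any NFS exports were created. A one-line check for the same missing-socket condition, using the path exactly as printed above:

    from pathlib import Path

    def dbus_available(root: str = "/") -> bool:
        # True only if the system bus socket the log complains
        # about actually exists and is a socket.
        return (Path(root) / "run/dbus/system_bus_socket").is_socket()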
Dec 11 09:18:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:50 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 11 09:18:50 compute-1 ceph-mon[80018]: osdmap e128: 3 total, 3 up, 3 in
Dec 11 09:18:50 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Dec 11 09:18:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 129 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=7 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=129) [1]/[0] r=0 lpr=129 pi=[91,129)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:50 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 129 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=91/92 n=7 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=129) [1]/[0] r=0 lpr=129 pi=[91,129)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:51.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:51 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.002000056s ======
Dec 11 09:18:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:51.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Dec 11 09:18:51 compute-1 ceph-mon[80018]: osdmap e129: 3 total, 3 up, 3 in
Dec 11 09:18:51 compute-1 ceph-mon[80018]: pgmap v33: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec 11 09:18:51 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec 11 09:18:51 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Dec 11 09:18:51 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 130 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=129/130 n=7 ec=61/50 lis/c=91/91 les/c/f=92/92/0 sis=129) [1]/[0] async=[1] r=0 lpr=129 pi=[91,129)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:52 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:52 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 11 09:18:52 compute-1 ceph-mon[80018]: osdmap e130: 3 total, 3 up, 3 in
Dec 11 09:18:52 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Dec 11 09:18:52 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 131 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=129/130 n=7 ec=61/50 lis/c=129/91 les/c/f=130/92/0 sis=131 pruub=14.981585503s) [1] async=[1] r=-1 lpr=131 pi=[91,131)/1 crt=56'1088 mlcod 56'1088 active pruub 288.495758057s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:52 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 131 pg[10.19( v 56'1088 (0'0,56'1088] local-lis/les=129/130 n=7 ec=61/50 lis/c=129/91 les/c/f=130/92/0 sis=131 pruub=14.981505394s) [1] r=-1 lpr=131 pi=[91,131)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 288.495758057s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091853 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:18:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:53.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.002000055s ======
Dec 11 09:18:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:53.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Dec 11 09:18:53 compute-1 ceph-mon[80018]: osdmap e131: 3 total, 3 up, 3 in
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: pgmap v36: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.5 KiB/s wr, 6 op/s
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:18:53 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Dec 11 09:18:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 132 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=13.282785416s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=56'1088 mlcod 0'0 active pruub 288.308349609s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 132 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=132 pruub=13.282604218s) [1] r=-1 lpr=132 pi=[95,132)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 288.308349609s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091854 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:18:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:54 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:54 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 11 09:18:54 compute-1 ceph-mon[80018]: osdmap e132: 3 total, 3 up, 3 in
Dec 11 09:18:54 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Dec 11 09:18:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 133 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:54 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 133 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=95/96 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] r=0 lpr=133 pi=[95,133)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:18:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:55.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:18:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:55.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:55 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Dec 11 09:18:55 compute-1 ceph-mon[80018]: osdmap e133: 3 total, 3 up, 3 in
Dec 11 09:18:55 compute-1 ceph-mon[80018]: pgmap v39: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 11 09:18:55 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec 11 09:18:55 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 134 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=133/134 n=2 ec=61/50 lis/c=95/95 les/c/f=96/96/0 sis=133) [1]/[0] async=[1] r=0 lpr=133 pi=[95,133)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:18:56 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 135 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=133/134 n=2 ec=61/50 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=15.618486404s) [1] async=[1] r=-1 lpr=135 pi=[95,135)/1 crt=56'1088 mlcod 56'1088 active pruub 292.655517578s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:18:56 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 135 pg[10.1b( v 56'1088 (0'0,56'1088] local-lis/les=133/134 n=2 ec=61/50 lis/c=133/95 les/c/f=134/96/0 sis=135 pruub=15.618438721s) [1] r=-1 lpr=135 pi=[95,135)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 292.655517578s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:18:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:56 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:56 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:18:56 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 11 09:18:56 compute-1 ceph-mon[80018]: osdmap e134: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-1 ceph-mon[80018]: osdmap e135: 3 total, 3 up, 3 in
Dec 11 09:18:56 compute-1 sudo[87743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:18:56 compute-1 sudo[87743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:56 compute-1 sudo[87743]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Dec 11 09:18:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:57.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:18:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:57.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
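
The beast lines above show paired anonymous "HEAD / HTTP/1.0" probes from 192.168.122.102 and 192.168.122.100 arriving roughly every two seconds and answered with 200, which looks like load-balancer health checking rather than client traffic (an inference; the callers are not identified in the log). A minimal sketch, assuming this log is fed on stdin, to tally the probes per client and average the reported latency:

    import re, sys
    from collections import Counter
    from statistics import mean

    # Pull client IP, request, status, and latency out of radosgw "beast" access lines.
    # Usage: python3 rgw_probes.py < messages
    PAT = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[[^\]]+\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) \d+ .* latency=(?P<lat>[\d.]+)s'
    )

    by_client, lats = Counter(), []
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            by_client[m['ip']] += 1
            lats.append(float(m['lat']))

    for ip, n in by_client.most_common():
        print(f"{ip}: {n} requests")
    if lats:
        print(f"mean latency {mean(lats):.6f}s over {len(lats)} requests")
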
Dec 11 09:18:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:57 compute-1 ceph-mon[80018]: pgmap v42: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s; 27 B/s, 1 objects/s recovering
Dec 11 09:18:57 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec 11 09:18:57 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
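
Note how mgr.compute-0.wwpcae raises pgp_num_actual on default.rgw.log by exactly one per round (29, then 30 here, with 31 and 32 following below), each step producing a fresh osdmap epoch and the osd.0 repeering seen nearby. The stepwise ramp is consistent with the mgr working gradually toward a new placement-group target, though the log does not name the trigger. A small sketch to chart the ramp from these audit lines:

    import re, sys

    # Follow the mgr's step-by-step pgp_num_actual changes in the mon audit lines.
    # Dispatch lines use cmd=[{...}] while finished lines use cmd='[{...}]',
    # so the surrounding quotes are optional in the pattern.
    PAT = re.compile(
        r"cmd='?\[\{\"prefix\": \"osd pool set\", \"pool\": \"(?P<pool>[^\"]+)\", "
        r"\"var\": \"pgp_num_actual\", \"val\": \"(?P<val>\d+)\"\}\]'?: (?P<phase>dispatch|finished)"
    )

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            ts = " ".join(line.split()[:3])   # e.g. "Dec 11 09:18:57"
            print(f"{ts}  {m['pool']}: pgp_num_actual -> {m['val']} ({m['phase']})")
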
Dec 11 09:18:57 compute-1 ceph-mon[80018]: osdmap e136: 3 total, 3 up, 3 in
Dec 11 09:18:58 compute-1 sudo[87768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:18:58 compute-1 sudo[87768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:18:58 compute-1 sudo[87768]: pam_unix(sudo:session): session closed for user root
Dec 11 09:18:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:58 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:59 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:59 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:18:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:18:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:18:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:18:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:18:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:18:59.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:18:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:18:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86340016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:00 compute-1 ceph-mon[80018]: pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Dec 11 09:19:00 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec 11 09:19:00 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Dec 11 09:19:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:00 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:01 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 11 09:19:01 compute-1 ceph-mon[80018]: osdmap e137: 3 total, 3 up, 3 in
Dec 11 09:19:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c0089d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:01 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:01.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:02 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Dec 11 09:19:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 138 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=106/106 les/c/f=107/107/0 sis=138) [0] r=0 lpr=138 pi=[106,138)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 11 09:19:02 compute-1 ceph-mon[80018]: pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 196 B/s rd, 0 op/s; 21 B/s, 0 objects/s recovering
Dec 11 09:19:02 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec 11 09:19:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 137 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=5 ec=61/50 lis/c=83/83 les/c/f=84/84/0 sis=137 pruub=10.349140167s) [2] r=-1 lpr=137 pi=[83,137)/1 crt=56'1088 mlcod 0'0 active pruub 293.179748535s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:19:02 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 138 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=5 ec=61/50 lis/c=83/83 les/c/f=84/84/0 sis=137 pruub=10.347904205s) [2] r=-1 lpr=137 pi=[83,137)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 293.179748535s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:19:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:02 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:03 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 11 09:19:03 compute-1 ceph-mon[80018]: osdmap e138: 3 total, 3 up, 3 in
Dec 11 09:19:03 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Dec 11 09:19:03 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=106/106 les/c/f=107/107/0 sis=139) [0]/[2] r=-1 lpr=139 pi=[106,139)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:19:03 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=61/50 lis/c=106/106 les/c/f=107/107/0 sis=139) [0]/[2] r=-1 lpr=139 pi=[106,139)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 11 09:19:03 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 139 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=5 ec=61/50 lis/c=83/83 les/c/f=84/84/0 sis=139) [2]/[0] r=0 lpr=139 pi=[83,139)/1 crt=56'1088 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:19:03 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 139 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=83/84 n=5 ec=61/50 lis/c=83/83 les/c/f=84/84/0 sis=139) [2]/[0] r=0 lpr=139 pi=[83,139)/1 crt=56'1088 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 11 09:19:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:03.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:03.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
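
The svc_vc_recv failures recur every second or two, always on fd 39, and the trailing "rlen = %" is verbatim daemon output (an unexpanded format token). The pattern is consistent with a TCP checker connecting to the NFS listener without sending the PROXY protocol header it expects; that is an inference only, since the peer address is not logged. A sketch to tally the failures per ganesha worker thread:

    import re, sys
    from collections import Counter

    # Tally ganesha svc_vc_recv PROXY-header failures by worker thread and fd.
    # Usage: python3 ganesha_proxy_errs.py < messages
    PAT = re.compile(
        r"ganesha\.nfsd-\d+\[(?P<thr>\w+)\] .*svc_vc_recv: \S+ fd (?P<fd>\d+) proxy header"
    )

    hits = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            hits[(m['thr'], m['fd'])] += 1

    for (thr, fd), n in sorted(hits.items()):
        print(f"{thr} fd={fd}: {n} failed PROXY-header reads")
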
Dec 11 09:19:04 compute-1 ceph-mon[80018]: osdmap e139: 3 total, 3 up, 3 in
Dec 11 09:19:04 compute-1 ceph-mon[80018]: pgmap v49: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 175 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec 11 09:19:04 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Dec 11 09:19:04 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 140 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=139/140 n=5 ec=61/50 lis/c=83/83 les/c/f=84/84/0 sis=139) [2]/[0] async=[2] r=0 lpr=139 pi=[83,139)/1 crt=56'1088 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:19:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:04 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:05 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Dec 11 09:19:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 141 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=139/140 n=5 ec=61/50 lis/c=139/83 les/c/f=140/84/0 sis=141 pruub=15.001015663s) [2] async=[2] r=-1 lpr=141 pi=[83,141)/1 crt=56'1088 mlcod 56'1088 active pruub 300.887512207s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:19:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 141 pg[10.1e( v 56'1088 (0'0,56'1088] local-lis/les=139/140 n=5 ec=61/50 lis/c=139/83 les/c/f=140/84/0 sis=141 pruub=15.000961304s) [2] r=-1 lpr=141 pi=[83,141)/1 crt=56'1088 mlcod 0'0 unknown NOTIFY pruub 300.887512207s@ mbc={}] state<Start>: transitioning to Stray
Dec 11 09:19:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 141 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=139/106 les/c/f=140/107/0 sis=141) [0] r=0 lpr=141 pi=[106,141)/1 luod=0'0 crt=56'1088 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec 11 09:19:05 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 141 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=0/0 n=5 ec=61/50 lis/c=139/106 les/c/f=140/107/0 sis=141) [0] r=0 lpr=141 pi=[106,141)/1 crt=56'1088 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
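
The start_peering_interval lines record how each pg's up and acting sets shift as the osdmap advances (here pg 10.1e and 10.1f trading primaries between osd.0 and osd.2). A sketch to pull just those transitions out of the surrounding noise:

    import re, sys

    # Summarize OSD peering-interval changes: which pg, and how up/acting moved.
    PAT = re.compile(
        r"pg\[(?P<pgid>[0-9a-f.]+)\(.*start_peering_interval "
        r"up \[(?P<up0>[^\]]*)\] -> \[(?P<up1>[^\]]*)\], "
        r"acting \[(?P<a0>[^\]]*)\] -> \[(?P<a1>[^\]]*)\]"
    )

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            print(f"pg {m['pgid']}: up {m['up0']} -> {m['up1']}, "
                  f"acting {m['a0']} -> {m['a1']}")
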
Dec 11 09:19:05 compute-1 ceph-mon[80018]: osdmap e140: 3 total, 3 up, 3 in
Dec 11 09:19:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:05.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:05.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:06 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:06 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Dec 11 09:19:07 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:07 compute-1 ceph-osd[77625]: osd.0 pg_epoch: 142 pg[10.1f( v 56'1088 (0'0,56'1088] local-lis/les=141/142 n=5 ec=61/50 lis/c=139/106 les/c/f=140/107/0 sis=141) [0] r=0 lpr=141 pi=[106,141)/1 crt=56'1088 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 11 09:19:07 compute-1 ceph-mon[80018]: pgmap v51: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 404 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:07 compute-1 ceph-mon[80018]: osdmap e141: 3 total, 3 up, 3 in
Dec 11 09:19:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:07.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:08 compute-1 ceph-mon[80018]: osdmap e142: 3 total, 3 up, 3 in
Dec 11 09:19:08 compute-1 ceph-mon[80018]: pgmap v54: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 510 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:08 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:09 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:09.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:10 compute-1 ceph-mon[80018]: pgmap v55: 353 pgs: 1 activating+remapped, 1 remapped+peering, 351 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 5/221 objects misplaced (2.262%)
Dec 11 09:19:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:10 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:11.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:11.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:12 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:12 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:12 compute-1 ceph-mon[80018]: pgmap v56: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 146 B/s rd, 0 op/s; 15 B/s, 1 objects/s recovering
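
The pgmap summaries trace the whole episode: all 353 pgs active+clean, briefly 1 activating+remapped plus 1 remapped+peering with 5/221 objects misplaced (2.262%) at v51 through v55, then back to 353 active+clean by v56. A sketch to chart pg state counts across pgmap versions:

    import re, sys

    # Watch pg state counts evolve across pgmap versions.
    VER = re.compile(r"pgmap v(?P<ver>\d+): \d+ pgs: (?P<states>[^;]+)")
    STATE = re.compile(r"(\d+) ([a-z+]+)")

    for line in sys.stdin:
        m = VER.search(line)
        if not m:
            continue
        states = {name: int(n) for n, name in STATE.findall(m['states'])}
        mark = "" if states.get("active+clean", 0) == sum(states.values()) \
               else "   <-- transition in progress"
        print(f"v{m['ver']}: {states}{mark}")
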
Dec 11 09:19:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:13.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:13.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:14 compute-1 ceph-mon[80018]: pgmap v57: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 0 op/s; 13 B/s, 1 objects/s recovering
Dec 11 09:19:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:15.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:15.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:16 compute-1 ceph-mon[80018]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 11 B/s, 1 objects/s recovering
Dec 11 09:19:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:16 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:17 compute-1 sudo[87803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:19:17 compute-1 sudo[87803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:17 compute-1 sudo[87803]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:17 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:17.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:18 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:18 compute-1 ceph-mon[80018]: pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 285 B/s rd, 0 op/s; 10 B/s, 0 objects/s recovering
Dec 11 09:19:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:19 compute-1 ceph-mon[80018]: pgmap v60: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 11 09:19:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:19.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:19.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:20 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:21.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:22 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:22 compute-1 ceph-mon[80018]: pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 9 B/s, 0 objects/s recovering
Dec 11 09:19:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:22 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:23.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.714569) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763714715, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2180, "num_deletes": 252, "total_data_size": 8826816, "memory_usage": 9075272, "flush_reason": "Manual Compaction"}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 11 09:19:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:23.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763753096, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 5492298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8270, "largest_seqno": 10445, "table_properties": {"data_size": 5482595, "index_size": 6069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21769, "raw_average_key_size": 21, "raw_value_size": 5462297, "raw_average_value_size": 5334, "num_data_blocks": 266, "num_entries": 1024, "num_filter_entries": 1024, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444687, "oldest_key_time": 1765444687, "file_creation_time": 1765444763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 39197 microseconds, and 21354 cpu microseconds.
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.753787) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 5492298 bytes OK
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.754034) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.756087) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.756138) EVENT_LOG_v1 {"time_micros": 1765444763756129, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.756164) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 8816344, prev total WAL file size 8816344, number of live WAL files 2.
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.761338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(5363KB)], [18(12MB)]
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763761450, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18732212, "oldest_snapshot_seqno": -1}
Dec 11 09:19:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4033 keys, 14175666 bytes, temperature: kUnknown
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763952565, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14175666, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14143053, "index_size": 21431, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 102931, "raw_average_key_size": 25, "raw_value_size": 14063471, "raw_average_value_size": 3487, "num_data_blocks": 921, "num_entries": 4033, "num_filter_entries": 4033, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444500, "oldest_key_time": 0, "file_creation_time": 1765444763, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.952820) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14175666 bytes
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.954170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 98.0 rd, 74.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.2, 12.6 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(6.0) write-amplify(2.6) OK, records in: 4570, records dropped: 537 output_compression: NoCompression
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.954211) EVENT_LOG_v1 {"time_micros": 1765444763954200, "job": 8, "event": "compaction_finished", "compaction_time_micros": 191195, "compaction_time_cpu_micros": 51155, "output_level": 6, "num_output_files": 1, "total_output_size": 14175666, "num_input_records": 4570, "num_output_records": 4033, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763955427, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444763958572, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.761168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.958657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.958661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.958663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.958664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:19:23 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:19:23.958666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
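
The rocksdb burst above is one manual compaction of the mon store: job 7 flushes the memtable to table #20 in about 39 ms, job 8 compacts it with table #18 into #21 (4570 records in, 537 dropped) in about 191 ms, and both inputs are then deleted. The EVENT_LOG_v1 payloads are plain JSON, so they can be summarized directly; a sketch:

    import json, re, sys

    # Decode the JSON payload of rocksdb EVENT_LOG_v1 lines and summarize the jobs.
    PAT = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    for line in sys.stdin:
        m = PAT.search(line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "flush_finished":
            print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
        elif ev.get("event") == "compaction_finished":
            print(f"job {ev['job']}: compacted to L{ev['output_level']} in "
                  f"{ev['compaction_time_micros'] / 1000:.0f} ms, "
                  f"{ev['num_input_records']} in / {ev['num_output_records']} out")
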
Dec 11 09:19:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:24 compute-1 ceph-mon[80018]: pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:25 compute-1 ceph-mon[80018]: pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:19:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:25.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:25.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:26 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:27 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:27.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:27.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:28 compute-1 ceph-mon[80018]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:28 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:29.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:30 compute-1 ceph-mon[80018]: pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:30 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:31.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:32 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:32 compute-1 ceph-mon[80018]: pgmap v66: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:33 compute-1 ceph-mon[80018]: pgmap v67: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:33.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:33.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:34 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:35.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:35.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:36 compute-1 ceph-mon[80018]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:19:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091936 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
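
haproxy marking backend/nfs.cephfs.1 DOWN with "Connection refused" points at a ganesha instance elsewhere in the cluster failing its Layer4 check; whether it relates to the PROXY-header failures logged locally cannot be determined from this host alone. A sketch to list such state transitions:

    import re, sys

    # List haproxy server state transitions with the reported reason.
    PAT = re.compile(r"Server (?P<srv>\S+) is (?P<state>UP|DOWN)(?:, reason: (?P<why>[^,.]+))?")

    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            why = f" ({m['why']})" if m['why'] else ""
            print(f"{m['srv']} -> {m['state']}{why}")
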
Dec 11 09:19:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:36 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:37 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:37 compute-1 sudo[87841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:19:37 compute-1 sudo[87841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:37 compute-1 sudo[87841]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:37.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:38 compute-1 ceph-mon[80018]: pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:39.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:39 compute-1 ceph-mon[80018]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:19:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:39.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:40 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:41.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:41.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:42 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:42 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:42 compute-1 ceph-mon[80018]: pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:43 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:43 compute-1 ceph-mon[80018]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:19:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:43.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:43 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:45 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:45.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:45.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:45 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:19:46 compute-1 ceph-mon[80018]: pgmap v73: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:19:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:47 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:47.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:48 compute-1 ceph-mon[80018]: pgmap v74: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:19:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:48 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:19:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:19:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:49.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:49.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002950 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:50 compute-1 ceph-mon[80018]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Dec 11 09:19:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f863c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:51 compute-1 ceph-mon[80018]: pgmap v76: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:19:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:51.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:51.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:52 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:52 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:19:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:52 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:53.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:54 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:54 compute-1 ceph-mon[80018]: pgmap v77: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:19:54 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:19:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:55 compute-1 ceph-mon[80018]: pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:55.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:55.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:56 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:56 compute-1 ceph-mon[80018]: mgrmap e36: compute-0.wwpcae(active, since 92s), standbys: compute-2.uiimcn, compute-1.unesvp
Dec 11 09:19:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:19:57 compute-1 sudo[87879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:19:57 compute-1 sudo[87879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:57 compute-1 sudo[87879]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:19:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:57.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:19:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:57.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:58 compute-1 ceph-mon[80018]: pgmap v79: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:19:58 compute-1 sudo[87904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:19:58 compute-1 sudo[87904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:58 compute-1 sudo[87904]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:58 compute-1 sudo[87929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:19:58 compute-1 sudo[87929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:19:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/091958 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:19:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:58 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:59 compute-1 sudo[87929]: pam_unix(sudo:session): session closed for user root
Dec 11 09:19:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:19:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:19:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:19:59.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:19:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:19:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:19:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:19:59.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:19:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:19:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:00 compute-1 ceph-mon[80018]: pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Dec 11 09:20:00 compute-1 ceph-mon[80018]: overall HEALTH_OK
Dec 11 09:20:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:00 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:01.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:20:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:01.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:20:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:02 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:02 compute-1 ceph-mon[80018]: pgmap v81: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 852 B/s wr, 2 op/s
Dec 11 09:20:02 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:02 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:02 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f86200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:03.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:20:04 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:20:04 compute-1 ceph-mon[80018]: pgmap v82: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 681 B/s wr, 2 op/s
Dec 11 09:20:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:04 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:20:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:20:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:05.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:05 compute-1 ceph-mon[80018]: pgmap v83: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 681 B/s wr, 2 op/s
Dec 11 09:20:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:06 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:07 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:07.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:08 compute-1 ceph-mon[80018]: pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:08 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:08 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:08 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:08 compute-1 sudo[87992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:20:08 compute-1 sudo[87992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:08 compute-1 sudo[87992]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:09 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:20:09 compute-1 ceph-mon[80018]: pgmap v85: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:09.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:10 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:20:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:11.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:20:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:11.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:12 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:12 compute-1 ceph-mon[80018]: pgmap v86: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:12 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:13 compute-1 ceph-mon[80018]: pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:13.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:13.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:15.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:15.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:16 compute-1 ceph-mon[80018]: pgmap v88: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:16 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:17 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:17 compute-1 sudo[88021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:20:17 compute-1 sudo[88021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:17 compute-1 sudo[88021]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:17.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:17.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:18 compute-1 ceph-mon[80018]: pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:18 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:19.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:19.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:20 compute-1 ceph-mon[80018]: pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:20:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:20 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:21 compute-1 sshd-session[71280]: Received disconnect from 38.102.83.179 port 57032:11: disconnected by user
Dec 11 09:20:21 compute-1 sshd-session[71280]: Disconnected from user zuul 38.102.83.179 port 57032
Dec 11 09:20:21 compute-1 sshd-session[71277]: pam_unix(sshd:session): session closed for user zuul
Dec 11 09:20:21 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Dec 11 09:20:21 compute-1 systemd[1]: session-19.scope: Consumed 8.959s CPU time.
Dec 11 09:20:21 compute-1 systemd-logind[791]: Session 19 logged out. Waiting for processes to exit.
Dec 11 09:20:21 compute-1 systemd-logind[791]: Removed session 19.
Dec 11 09:20:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:21 compute-1 ceph-mon[80018]: pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:21.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:21.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:22 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092022 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:20:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:22 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:23 compute-1 ceph-mon[80018]: pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:23.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:23.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:25.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:25.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:26 compute-1 ceph-mon[80018]: pgmap v93: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:20:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:26 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:27 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:27.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:27.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:28 compute-1 ceph-mon[80018]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:28 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:29.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:29.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:30 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:30 compute-1 ceph-mon[80018]: pgmap v95: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:31.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:31 compute-1 ceph-mon[80018]: pgmap v96: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:31.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:32 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:20:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:32 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:33.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:34 compute-1 ceph-mon[80018]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:20:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:34 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:20:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:20:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:35 compute-1 ceph-mon[80018]: pgmap v98: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:20:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:36 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:37 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:37 compute-1 sudo[88058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:20:37 compute-1 sudo[88058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:37 compute-1 sudo[88058]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:37 compute-1 ceph-mon[80018]: pgmap v99: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:20:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:37.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:37.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:20:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:39.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:39.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:40 compute-1 ceph-mon[80018]: pgmap v100: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 11 09:20:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:40 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:41 compute-1 ceph-mon[80018]: pgmap v101: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:41.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:41.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:42 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:42 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:43 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:43.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:43 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:44 compute-1 ceph-mon[80018]: pgmap v102: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092044 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:20:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:45 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:45.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:45.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:45 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:46 compute-1 ceph-mon[80018]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:20:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:47 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:47.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:47.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:48 compute-1 ceph-mon[80018]: pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:20:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:48 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:49 compute-1 ceph-mon[80018]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:20:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:49.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:49.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:51.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:52 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:52 compute-1 ceph-mon[80018]: pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:52 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:53.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:54 compute-1 ceph-mon[80018]: pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:54 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:20:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:54 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:55.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:56 compute-1 ceph-mon[80018]: pgmap v108: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:20:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:56 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:20:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:57 compute-1 sudo[88095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:20:57 compute-1 sudo[88095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:20:57 compute-1 sudo[88095]: pam_unix(sudo:session): session closed for user root
Dec 11 09:20:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:57.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:20:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:57.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:20:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:58 compute-1 ceph-mon[80018]: pgmap v109: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:20:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:58 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:20:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:20:59.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:20:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:20:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:20:59.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:20:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:20:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:00 compute-1 ceph-mon[80018]: pgmap v110: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:21:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:00 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:01.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:01.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:02 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:02 compute-1 ceph-mon[80018]: pgmap v111: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092102 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:21:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:02 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644003530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644003530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:03.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:04 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:04 compute-1 ceph-mon[80018]: pgmap v112: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:05 compute-1 ceph-mon[80018]: pgmap v113: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:05.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:05.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644003530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:06 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:07 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:07.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:07.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:08 compute-1 ceph-mon[80018]: pgmap v114: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:08 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644003530 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:08 compute-1 sudo[88126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:21:08 compute-1 sudo[88126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:08 compute-1 sudo[88126]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:08 compute-1 sudo[88151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:21:08 compute-1 sudo[88151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092108 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:21:09 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:09 compute-1 sudo[88151]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:09.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:10 compute-1 ceph-mon[80018]: pgmap v115: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:21:10 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:21:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:10 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:11.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:11.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:12 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:12 compute-1 ceph-mon[80018]: pgmap v116: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:12 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:13.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:14 compute-1 ceph-mon[80018]: pgmap v117: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:21:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:21:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:15 compute-1 ceph-mon[80018]: pgmap v118: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 11 09:21:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:15.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:15 compute-1 sudo[88211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:21:15 compute-1 sudo[88211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:15 compute-1 sudo[88211]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:15.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:16 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:16 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:16 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:21:17 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:17 compute-1 sudo[88237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:21:17 compute-1 sudo[88237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:17 compute-1 sudo[88237]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:17.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:17 compute-1 ceph-mon[80018]: pgmap v119: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Dec 11 09:21:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:21:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:17.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:18 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:19.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:19.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:20 compute-1 ceph-mon[80018]: pgmap v120: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Dec 11 09:21:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:20 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:20 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:21.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:21.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:22 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:22 compute-1 ceph-mon[80018]: pgmap v121: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Dec 11 09:21:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:22 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
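The from='mgr.14577 ...' audit entries record the active mgr (mgr.compute-0.wwpcae) dispatching 'osd blocklist ls' to the monitors every 15 seconds (09:21:23, 09:21:38, 09:21:53, 09:22:08 in this window), consistent with the orchestrator/NFS machinery polling the client blocklist it uses to fence stale CephFS clients. The same query can be issued by hand; a sketch using subprocess on a host with the ceph CLI and an admin keyring (the wrapper name is illustrative):

    import json
    import subprocess

    def blocklist_entries():
        """Run 'ceph osd blocklist ls' and parse its JSON output."""
        out = subprocess.run(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out) if out.strip() else []

    for entry in blocklist_entries():
        print(entry)  # entry fields depend on the Ceph release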
Dec 11 09:21:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628004340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:21:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:23.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:21:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:23.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:21:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:21:24 compute-1 ceph-mon[80018]: pgmap v122: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Dec 11 09:21:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092124 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:21:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004ab0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:21:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:25.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:21:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:25.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:26 compute-1 ceph-mon[80018]: pgmap v123: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Dec 11 09:21:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:26 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
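The reaper-thread STATE events above trace NFS-Ganesha's reclaim grace cycle: the server enters grace for up to 90 seconds ('IN GRACE, duration 90'), reloads client reclaim records from its recovery backend, observes that no clients are waiting ('reclaim complete(0) clid count(0)'), and lifts grace early ('NOT IN GRACE'). In a clustered deployment, repeated enter/lift cycles like these typically follow membership changes, such as ganesha instances on other hosts restarting, which matches the haproxy backend flapping recorded further down. A small tracker over these lines, assuming the message texts and syslog timestamp prefix shown here:

    from datetime import datetime

    def grace_transitions(lines):
        """Yield (timestamp, state) for each grace enter/lift event."""
        for line in lines:
            if "NFS Server Now IN GRACE" in line:
                state = "IN_GRACE"
            elif "NFS Server Now NOT IN GRACE" in line:
                state = "NOT_IN_GRACE"
            else:
                continue
            # syslog prefix, e.g. "Dec 11 09:21:20 compute-1 ..."
            ts = datetime.strptime(" ".join(line.split()[:3]),
                                   "%b %d %H:%M:%S")
            yield ts, state

    def grace_durations(lines):
        """Pair each grace entry with its lift; yield (start, seconds)."""
        entered = None
        for ts, state in grace_transitions(lines):
            if state == "IN_GRACE":
                entered = ts
            elif entered is not None:
                yield entered, (ts - entered).total_seconds()
                entered = None

    sample = [
        "Dec 11 09:21:20 compute-1 ... NFS Server Now IN GRACE, duration 90",
        "Dec 11 09:21:27 compute-1 ... NFS Server Now NOT IN GRACE",
    ]
    print(list(grace_durations(sample)))  # one 7.0-second grace window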
Dec 11 09:21:27 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:27.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:28 compute-1 ceph-mon[80018]: pgmap v124: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 11 09:21:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:28 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:29 compute-1 ceph-mon[80018]: pgmap v125: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Dec 11 09:21:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:29.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:29.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:30 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092130 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:21:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:31.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:21:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:31.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:21:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:32 compute-1 ceph-mon[80018]: pgmap v126: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 11 09:21:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:32 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:33 compute-1 systemd[81771]: Created slice User Background Tasks Slice.
Dec 11 09:21:33 compute-1 systemd[81771]: Starting Cleanup of User's Temporary Files and Directories...
Dec 11 09:21:33 compute-1 systemd[81771]: Finished Cleanup of User's Temporary Files and Directories.
Dec 11 09:21:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:33.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:34 compute-1 ceph-mon[80018]: pgmap v127: 353 pgs: 353 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Dec 11 09:21:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:34 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:36 compute-1 ceph-mon[80018]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 938 B/s wr, 6 op/s
Dec 11 09:21:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:36 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c009670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:37 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:37 compute-1 sudo[88275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:21:37 compute-1 sudo[88275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:37 compute-1 sudo[88275]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:37 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:37 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:37 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:37.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:38 compute-1 ceph-mon[80018]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:39 compute-1 ceph-mon[80018]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 170 B/s wr, 4 op/s
Dec 11 09:21:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:21:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:39.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:21:39 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:39 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:39 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:40 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:41 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:41 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:41 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:41.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:42 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:42 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:42 compute-1 ceph-mon[80018]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:43 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:43 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:43 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:43 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:43.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:44 compute-1 ceph-mon[80018]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:45 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:45 compute-1 ceph-mon[80018]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 3 op/s
Dec 11 09:21:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:45 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:45 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:45 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:45.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092146 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
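haproxy's check messages give the load-balancer view of the same churn: backend/nfs.cephfs.2 passes its Layer4 check at 09:21:30, is marked DOWN with 'Connection refused' at 09:21:46, and comes back UP at 09:22:09 further down, while the remaining two servers keep the backend serviceable; the bare TCP checks producing these transitions are also the likely source of the ganesha PROXY-header events noted earlier. A sketch that extracts the transitions for flap detection, assuming the WARNING format logged here (the timestamp field is haproxy's day-of-year/HHMMSS):

    import re

    # e.g. "[WARNING] 344/092146 (4) : Server backend/nfs.cephfs.2 is DOWN,
    #       reason: Layer4 connection problem, ..."
    HAPROXY_RE = re.compile(
        r"\[WARNING\] \d+/(?P<hhmmss>\d{6}) \(\d+\) : "
        r"Server (?P<server>\S+) is (?P<state>UP|DOWN), "
        r"reason: (?P<reason>[^,.]+)"
    )

    def server_flaps(lines, threshold=2):
        """Collect UP/DOWN transitions per server; keep ones above threshold."""
        seen = {}
        for line in lines:
            m = HAPROXY_RE.search(line)
            if m:
                seen.setdefault(m["server"], []).append(
                    (m["hhmmss"], m["state"], m["reason"])
                )
        return {s: t for s, t in seen.items() if len(t) > threshold}

    sample = ('[WARNING] 344/092146 (4) : Server backend/nfs.cephfs.2 is '
              'DOWN, reason: Layer4 connection problem, info: '
              '"Connection refused"')
    print(server_flaps([sample], threshold=0))
    # {'backend/nfs.cephfs.2': [('092146', 'DOWN', 'Layer4 connection problem')]}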
Dec 11 09:21:47 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:47 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:47 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:47 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:47.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:48 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:48 compute-1 ceph-mon[80018]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:21:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:48 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:49 compute-1 ceph-mon[80018]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:21:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:49 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:49 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:49 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:50 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:50 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:51 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:51 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:51 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:51 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:21:51 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:21:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:52 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c003400 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:52 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:52 compute-1 ceph-mon[80018]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:52 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:52 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:21:53 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:53 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:53 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:53 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:53.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:53 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:54 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:54 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:54 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:54 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:54 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003e80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:54 compute-1 ceph-mon[80018]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Dec 11 09:21:55 compute-1 ceph-mon[80018]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:55 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:55 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:55 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:55 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:55.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:56 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:56 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:56 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:56 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:56 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:56 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:56 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec 11 09:21:57 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:21:57 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:57 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:57 compute-1 sudo[88312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:21:57 compute-1 sudo[88312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:21:57 compute-1 sudo[88312]: pam_unix(sudo:session): session closed for user root
Dec 11 09:21:57 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:57 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:57 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:57.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:58 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:58 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:58 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:21:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:58 compute-1 ceph-mon[80018]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Dec 11 09:21:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:58 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:58 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:58 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644003a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:21:59 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:21:59 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:21:59 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:21:59.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:21:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec 11 09:21:59 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:21:59 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:22:00 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:00 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:00 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:00 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:00 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec 11 09:22:00 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:00 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:00 compute-1 ceph-mon[80018]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Dec 11 09:22:01 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:01 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:01 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:01 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:01 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:01.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:01 compute-1 ceph-mon[80018]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:22:02 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:02 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:02 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:02 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:02 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:02 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:02 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec 11 09:22:03 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:03 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644003a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:03 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:03 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:03 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:03.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:04 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:04 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:22:04 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:22:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:04 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:04 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:04 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:04 compute-1 ceph-mon[80018]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Dec 11 09:22:05 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:05 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:05 compute-1 ceph-mon[80018]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Dec 11 09:22:05 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:05 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:05 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:05.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:06 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:06 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:06 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:06.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:06 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644002ca0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:06 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:06 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:07 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:07 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:07 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:07 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:07 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:07 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:07.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:08 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:08 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:08 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:08 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:08 compute-1 ceph-mon[80018]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:22:08 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:08 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:08 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092209 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec 11 09:22:09 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:09 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:09 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:09 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:09 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:09.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:10 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:10 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000028s ======
Dec 11 09:22:10 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:10.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Dec 11 09:22:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:10 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:10 compute-1 ceph-mon[80018]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 938 B/s wr, 3 op/s
Dec 11 09:22:10 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:10 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:11 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:11 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:11 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:11 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:11 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:11.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:12 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:12 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:12 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:12 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:12 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:12 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:12 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:12 compute-1 ceph-mon[80018]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:13 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:13 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:13 compute-1 ceph-mon[80018]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:13 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:13 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:13 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:13.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:14 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:14 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:14 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:14 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:14 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:15 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:15 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:15 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:15 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:15 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:16 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:16 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:16 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:16 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:16 compute-1 sudo[88346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 11 09:22:16 compute-1 sudo[88346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:16 compute-1 sudo[88346]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:16 compute-1 sudo[88371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Dec 11 09:22:16 compute-1 sudo[88371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:16 compute-1 ceph-mon[80018]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 426 B/s wr, 2 op/s
Dec 11 09:22:16 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:16 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:16 compute-1 sudo[88371]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:17 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec 11 09:22:17 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec 11 09:22:17 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:17 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:17 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:17 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:17 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:17.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:17 compute-1 sudo[88428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:22:17 compute-1 sudo[88428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:17 compute-1 sudo[88428]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:18 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:18 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:18 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:18.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:18 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:18 compute-1 ceph-mon[80018]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:22:18 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:18 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:19 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:19 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:19 compute-1 ceph-mon[80018]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Dec 11 09:22:19 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:19 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:19 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:19.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:20 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:20 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:20 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:20.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:20 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:20 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:20 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:21 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:21 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.606142) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941606293, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1892, "num_deletes": 252, "total_data_size": 5445682, "memory_usage": 5628280, "flush_reason": "Manual Compaction"}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941624152, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2217112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10450, "largest_seqno": 12337, "table_properties": {"data_size": 2211440, "index_size": 2808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14336, "raw_average_key_size": 20, "raw_value_size": 2199055, "raw_average_value_size": 3084, "num_data_blocks": 125, "num_entries": 713, "num_filter_entries": 713, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444764, "oldest_key_time": 1765444764, "file_creation_time": 1765444941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 18193 microseconds, and 6768 cpu microseconds.
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.624334) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2217112 bytes OK
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.624374) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.626680) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.626697) EVENT_LOG_v1 {"time_micros": 1765444941626691, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.626722) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 5437142, prev total WAL file size 5437142, number of live WAL files 2.
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.628073) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323533' seq:0, type:0; will stop at (end)
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2165KB)], [21(13MB)]
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941628401, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16392778, "oldest_snapshot_seqno": -1}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4308 keys, 14582000 bytes, temperature: kUnknown
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941742573, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14582000, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14548956, "index_size": 21158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 108931, "raw_average_key_size": 25, "raw_value_size": 14466110, "raw_average_value_size": 3357, "num_data_blocks": 908, "num_entries": 4308, "num_filter_entries": 4308, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765444500, "oldest_key_time": 0, "file_creation_time": 1765444941, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e39b1f64-5981-400c-b4cc-531ee396f1c6", "db_session_id": "AQJDRPP5WSRURMC1H049", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.743151) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14582000 bytes
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.745361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 127.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 13.5 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(14.0) write-amplify(6.6) OK, records in: 4746, records dropped: 438 output_compression: NoCompression
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.745399) EVENT_LOG_v1 {"time_micros": 1765444941745383, "job": 10, "event": "compaction_finished", "compaction_time_micros": 114484, "compaction_time_cpu_micros": 40706, "output_level": 6, "num_output_files": 1, "total_output_size": 14582000, "num_input_records": 4746, "num_output_records": 4308, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941746171, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765444941749864, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.627941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.749998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.750009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.750011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.750013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:21 compute-1 ceph-mon[80018]: rocksdb: (Original Log Time 2025/12/11-09:22:21.750016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 11 09:22:21 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:21 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:21 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:21.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:22 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:22 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:22 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:22 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:22.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:22 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:22 compute-1 ceph-mon[80018]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:22 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:22 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:23 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:23 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:23 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:23 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:23 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:23 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:23.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:24 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:24 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:24 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:24 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:24 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:24 compute-1 ceph-mon[80018]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:25 compute-1 sudo[88458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 11 09:22:25 compute-1 sudo[88458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:25 compute-1 sudo[88458]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:25 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:25 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:25 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:25 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:25 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:25.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:26 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:26 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:26 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:26 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:26.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:26 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:26 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' 
Dec 11 09:22:26 compute-1 ceph-mon[80018]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:26 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:26 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:27 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:27 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:27 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8644004380 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:27 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:27 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:27 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:27.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:28 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:28 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:28 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:28 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:28.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:28 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:28 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:28 compute-1 ceph-mon[80018]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:29 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:29 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8620003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:29 compute-1 ceph-mon[80018]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:29 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:29 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:29 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:29.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:30 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:30 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:30 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:30 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:30.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:30 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:30 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:31 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:31 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:31 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:31 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:31 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:31.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:32 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:32 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:32 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:32 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:32 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:32 compute-1 ceph-mon[80018]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:32 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:32 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:33 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:33 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:33 compute-1 ceph-mon[80018]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:33 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:33 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:33 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:34 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:34 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:34 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:34 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:34.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:34 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:34 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:35 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:35 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:35 compute-1 ceph-mon[80018]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:35 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:35 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 11 09:22:35 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:35.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 11 09:22:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:36 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:36 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:36 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000026s ======
Dec 11 09:22:36 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:36.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Dec 11 09:22:36 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:36 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:37 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:37 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:37 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:38 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:38 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:38 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:38.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:38 compute-1 sudo[88491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Dec 11 09:22:38 compute-1 sudo[88491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 11 09:22:38 compute-1 sudo[88491]: pam_unix(sudo:session): session closed for user root
Dec 11 09:22:38 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:38 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:38 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:38.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:38 compute-1 ceph-mon[80018]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:38 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:38 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:38 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:39 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:39 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:40 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:40 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:40 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:40.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:40 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:40 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:40 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:40 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:40 compute-1 ceph-mon[80018]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:40 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:40 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:41 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:41 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:42 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:42 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:42 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:42.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:42 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:42 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:42 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:42 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:42.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:42 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:42 compute-1 ceph-mon[80018]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:42 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:42 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:43 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:43 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8634001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:43 compute-1 sshd-session[88519]: Accepted publickey for zuul from 192.168.122.10 port 40702 ssh2: ECDSA SHA256:cT48FffNzE3FSGRebchaTzw3hIqcCIoBfXY30Q2C9bc
Dec 11 09:22:43 compute-1 systemd-logind[791]: New session 37 of user zuul.
Dec 11 09:22:43 compute-1 systemd[1]: Started Session 37 of User zuul.
Dec 11 09:22:43 compute-1 sshd-session[88519]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 11 09:22:43 compute-1 sudo[88523]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 11 09:22:43 compute-1 sudo[88523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 11 09:22:44 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:44 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:44 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:44 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:44 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:44 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:44 compute-1 ceph-mon[80018]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:44 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:44 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:45 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:45 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:46 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:46 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:46 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:46.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:46 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:46 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:46 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:46 compute-1 ceph-mon[80018]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Dec 11 09:22:46 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:46 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:47 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:47 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:47 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8628002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:47 compute-1 ceph-mon[80018]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:48 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:48 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:48 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:48.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:48 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862000be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:48 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:48 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:48 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:48.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:48 compute-1 ovs-vsctl[88729]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 11 09:22:48 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:48 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f864c00a840 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec 11 09:22:49 compute-1 kernel: ganesha.nfsd[88631]: segfault at 50 ip 00007f86d47f032e sp 00007f86537fd210 error 4 in libntirpc.so.5.8[7f86d47d5000+2c000] likely on CPU 5 (core 0, socket 5)
Dec 11 09:22:49 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec 11 09:22:49 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy[87369]: 11/12/2025 09:22:49 : epoch 693a8c6e : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f862c002690 fd 39 proxy ignored for local
Dec 11 09:22:49 compute-1 systemd[1]: Started Process Core Dump (PID 88971/UID 0).
Dec 11 09:22:49 compute-1 lvm[89062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 11 09:22:49 compute-1 lvm[89062]: VG ceph_vg0 finished
Dec 11 09:22:50 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:50 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:50 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:50.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:50 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:50 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:50 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:50.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:50 compute-1 ceph-mon[80018]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec 11 09:22:51 compute-1 systemd-coredump[88974]: Process 87373 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 70:
                                                   #0  0x00007f86d47f032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Dec 11 09:22:51 compute-1 systemd[1]: systemd-coredump@1-88971-0.service: Deactivated successfully.
Dec 11 09:22:51 compute-1 systemd[1]: systemd-coredump@1-88971-0.service: Consumed 1.890s CPU time.
Dec 11 09:22:51 compute-1 podman[89404]: 2025-12-11 09:22:51.582792229 +0000 UTC m=+0.031320343 container died 67901d62ebbffa9c8adaf2d28b9b62ff2946b775e677dff1a91af4a12ec02c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:22:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-55cf8d0156a3361c5d2cf3f40987c456f6c21e116fea02694d5f46d6b4c8aa3c-merged.mount: Deactivated successfully.
Dec 11 09:22:51 compute-1 podman[89404]: 2025-12-11 09:22:51.636501628 +0000 UTC m=+0.085029742 container remove 67901d62ebbffa9c8adaf2d28b9b62ff2946b775e677dff1a91af4a12ec02c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-nfs-cephfs-0-0-compute-1-vlrwzy, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec 11 09:22:51 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Main process exited, code=exited, status=139/n/a
Dec 11 09:22:51 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Failed with result 'exit-code'.
Dec 11 09:22:51 compute-1 systemd[1]: ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060@nfs.cephfs.0.0.compute-1.vlrwzy.service: Consumed 2.366s CPU time.
Dec 11 09:22:51 compute-1 crontab[89526]: (root) LIST (root)
Dec 11 09:22:52 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:52 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:52 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:52.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:52 compute-1 ceph-mon[80018]: mon.compute-1@2(peon).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 11 09:22:52 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:52 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:52 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:52.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:52 compute-1 ceph-mon[80018]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:53 compute-1 ceph-mon[80018]: from='mgr.14577 192.168.122.100:0/2102670525' entity='mgr.compute-0.wwpcae' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec 11 09:22:54 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:54 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:54 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:54.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:54 compute-1 systemd[1]: Starting Hostname Service...
Dec 11 09:22:54 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:54 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:54 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:54.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:54 compute-1 systemd[1]: Started Hostname Service.
Dec 11 09:22:54 compute-1 ceph-mon[80018]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec 11 09:22:55 compute-1 ceph-31aeaa1d-a3b6-5c37-8b0c-757ef5b8d060-haproxy-nfs-cephfs-compute-1-aifiay[85203]: [WARNING] 344/092255 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec 11 09:22:56 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:56 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.001000027s ======
Dec 11 09:22:56 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.102 - anonymous [11/Dec/2025:09:22:56.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Dec 11 09:22:56 compute-1 radosgw[83611]: ====== starting new request req=0x7f971f10f5d0 =====
Dec 11 09:22:56 compute-1 radosgw[83611]: ====== req done req=0x7f971f10f5d0 op status=0 http_status=200 latency=0.000000000s ======
Dec 11 09:22:56 compute-1 radosgw[83611]: beast: 0x7f971f10f5d0: 192.168.122.100 - anonymous [11/Dec/2025:09:22:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec 11 09:22:56 compute-1 ceph-mon[80018]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s